Security Now 1012 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show
0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here. We're going to talk about that malware that crept into the Apple App Store just a couple of weeks ago. The UK says Apple has to put a back door in its encryption; Steve's opinion on that. And we'll talk about how common it is for schools in the United States to hide the fact that they've been hit by ransomware, and why they do it. It's all coming up next on Security Now. Podcasts you love, from people you trust. This is TWiT. This is Security Now with Steve Gibson, episode 1012, recorded Tuesday, February 11th, 2025: Hiding School Cyber Attacks. It's time for Security Now, the show where we cover the latest security news, privacy news, encryption news, and sprinkle in a little bit of stuff about science fiction and TV shows and whatever else this guy right here is into at the moment. Steve Gibson, our polymathic master of ceremonies. Hi, Steve.
0:01:15 - Steve Gibson
Hello, my friend, it's great to be with you again. Now we're out of the binary episodes. We had 1-0-1-0. Well, we had, of course, 1,000. Then we had 1,001. Then we had 1,010, 1,011. Now we're out of them, unfortunately, until we get to 1,100, which is not that far.
0:01:36 - Leo Laporte
No, it's only 88 episodes away.
0:01:38 - Steve Gibson
We could do it. I think we're going to still be here.
0:01:41 - Leo Laporte
That's less than two years. We will definitely be here for 1,100.
0:01:44 - Steve Gibson
Until then, back to decimal. So while I was just catching up on news, and we've got a bunch of interesting news, I ran across an entity I'd never heard of called The 74. The 74, as in 74 million, which is the number of kids in K through 12, you know, lower education schools. It's an organization that basically represents their interests, and it's nonpartisan, it's straight down the middle, and they have a code of ethics that their reporters follow. They've published an investigative piece about the problem this country has with hidden school cyber attacks, which are going on at a much higher rate than is believed, because it turns out there are actual coaches that are quite busy coaching educators about how not to reveal the fact that their school has been compromised. And, of course, this has consequences downstream for the kids, whose personal data, disciplinary records, family problems, emotional health, all kinds of personal information is out, and the educators are denying it.
Anyway, I know that a lot of our listeners are parents of school-age kids. I've had feedback from them through the years about this, and there was a lot to say about it. So we're going to spend half of the podcast going through this detailed piece of investigative journalism, because, you know, I thought, well, can I summarize this somehow? But what I came away with after reading the entire thing was a truly deep understanding of the dynamics of this, and, unfortunately, I'll be drawing some conclusions which I think everyone will be able to follow by the time we get through this. So that's our main topic for the day, and if it's something that's not of interest to anybody, well, fine, stop listening after you get to the first hour and a half. After you hear all the ads, Steve. After you hear all the ads, you can stop listening.
Oh, that's right. After the last ad, we'll let you know you can leave now. You can leave the classroom, children.
0:04:38 - Leo Laporte
You can leave anytime.
0:04:39 - Steve Gibson
But first we're going to talk about SparkCat, which I know you guys talked about over on MacBreak Weekly. SparkCat is the name of the secret-stealing AI image scanner which has been discovered in both the App Store and the Play Store. I also saw you mentioning the UK's new demands about Apple doing the impossible, and we're going to touch on the whole Advanced Data Protection issue. We talked about it when Apple first announced it; the UK's back again.
France is also moving forward on legislation to require backdoors into encryption, and, as I've been saying now for a couple of years, this is the great question, right? This is why we're glad we went past 999, because how do you solve this? And I'm going to reiterate the solution that I think exists, which also defuses the arguments about why government needs this stuff. Firefox has moved to version 135, and it has a bunch of useful new features. If you press, I think it's Ctrl+Shift+X, up pops ChatGPT. Yeah, they've added AI chat to an optional sidebar in Firefox, and a bunch of other stuff we'll talk about. Also, the Five Eyes Alliance has published their guidance for edge device security, which I know our listeners who are involved in enterprise environments will like, because it's both a checklist and a CYA for them.
Six Netgear routers, in two sets of three, contain CVSS 9.6 and 9.8 vulnerabilities allowing unauthenticated remote code execution. You want to make sure you don't have any of these six routers, and you're only vulnerable if you also didn't follow the guidance and you've got remote admin enabled, which of course I'm sure none of our listeners have. But if you did, oh boy. And one of our favorite classes of utilities, those from Sysinternals: it turns out that most of them allow malicious Windows DLL injection, and apparently Microsoft doesn't care. We'll look at that more closely. Google has removed restrictive do-gooder language from a fuzzer which has successfully jailbroken the most powerful and supposedly guardrail-equipped ChatGPT o3 model. We're going to look at all that. And then we're going to end by examining the well and deliberately hidden truth behind ransomware cyber attacks on the US's K-12 schools. And of course, we have a Picture of the Week for the ages, Leo.
This is one where it's like, okay. I think I mentioned it actually either last week or the week before: the nominee for the 2025 Darwin Awards.
0:08:11 - Leo Laporte
You know I love the Darwin Awards. I haven't thought about them in a long time.
0:08:16 - Steve Gibson
I think we remember the one where someone like strapped a skateboard to a jet engine or something.
0:08:21 - Leo Laporte
I mean, they're called the Darwin Awards because most of the people who do these experiments do not survive, and it is the survival of the fittest, or at least the least risk-taking. Good, I can't wait. I haven't looked at it yet. We'll look at it together in just a moment.
0:08:40 - Steve Gibson
One of those where it takes a minute to sort of parse the picture and then you think OMG, oh, I can't wait, I like, I love the puzzle ones, that's good, all right, standby.
0:08:51 - Leo Laporte
That's coming up in just a second, but first a word from our sponsor. You may remember, after the National Public Data broker breach, Steve had a tool that you could go and look and see, because that breach revealed the Social Security numbers and private information of hundreds of millions of Americans. He had a little tool where you could look up your name and see if you were in there, and Steve and I both were in there. Our Social Security numbers had both been revealed in the breach. Sigh. But then I did something interesting. I said, what about Lisa, my wife and our CEO? Not a trace. And I realized why: because she's been using DeleteMe. We needed it.
If you've ever searched for your name online, you will be dismayed (I don't recommend it) by how much of your personal information is available online. It is a goldmine, and not just for advertisers and marketers, but for bad guys as well. It is a source of harassment and spear phishing attacks. That's what happened to us. People were using Lisa's phone number and her name, texting her direct reports, because they knew their phone numbers, with spear phishing attacks: you know, "I'm in a meeting right now, too busy, but I've got to get these Amazon gift cards. Just go order a thousand of them, use your TWiT credit card and send them to this address. Thank you very much." Fortunately, our staff's smart, but I wasn't going to sit back and take that. It told me something: that the bad guys could find a lot of information about our company with these data brokers. The same thing is true of your family. DeleteMe's got personal plans, corporate plans and family plans, which will help you ensure that everyone in your family feels safe online.
DeleteMe reduces the risk from identity theft; that's a big part of it. Cybersecurity threats like those spear phishing attacks, harassment and more. There's just no reason for your data to be online. DeleteMe will go out and find all that. Their experts will find and remove your information from hundreds of data brokers. They have a list; they know them all. And of course, here's the problem: those data brokers, sure, they'll delete the data they're required to, but then the data still comes in and your dossier gets rebuilt. Plus, there are new data brokers every single day. It's a very lucrative business. So DeleteMe will go back out and periodically rescan and remove that information again, so that, like Lisa's, your stuff is just not there. And, as I said, they will continue to remove your information regularly so this stuff does not pop back up. And, by the way, what's out there, what those data brokers collect, is everything. I mean, it's just basically everything: addresses, photos, emails, relatives, phone numbers, social media, property values and more.
Do what we did: protect yourself, reclaim your privacy. Go to joindeleteme.com/twit. Offer code TWIT gets you 20% off. That's joindeleteme.com/twit, promo code TWIT for 20% off. We thank them so much for their support of Security Now. Time for our Picture of the Week. Now, I want to scroll up, and then, after I do, I will look at it for a minute, and then I'll let you all see the nominee for the Darwin Awards. Oh my God, that's got to be staged. If it's not, then, uh, let's do a little requiem for this guy. Yeah. So, uh, I understand the need to party.
0:12:28 - Steve Gibson
I certainly do. So we start with a very large inflatable backyard swimming pool. Yeah, looks like it could hold about 50 people. So, you know, big blue perimeter filled with water. Now, apparently, these guys didn't want to get out of the pool in order to, I don't know, enjoy some grilled meat. It's a party in the pool.
0:12:58 - Leo Laporte
They got the beers. They've actually taken a table. They put a table in the pool.
0:13:03 - Steve Gibson
Yes, yes, and now the problem is that, rather than using briquettes of any sort, they thought, well, you know, we don't have a wood-burning grill, we have an electric grill. Yeah, now we need to get power to this grill, but the cord's not long enough to reach from the grill past the perimeter of this round backyard pool, so we need to use an outlet strip in order to bridge from the grill to outside of the pool.
0:13:41 - Leo Laporte
But these guys are very safety conscious. They know you can't submerge that strip. Oh no, no that would not be good.
0:13:48 - Steve Gibson
No, so they took some floating flip-flops, some shower slippers, some sandals, and they stuck them on each end of the power strip to float it in the middle of the pool. Meanwhile, we see the cord from the power strip going over to the edge, where it meets what looks like a wood block, like a door wedge, connected to the cord.
0:14:22 - Leo Laporte
Maybe that's so that it will float if it falls in. This is ridiculous. So just out of curiosity, what would happen if that power strip fell in the pool?
0:14:38 - Steve Gibson
Okay, now, giving credit to our listeners: I finished putting the show notes together yesterday late afternoon, so I did the mailing. It went out to 16,100 of our listeners who have subscribed to the weekly mailing. So I've had feedback about this picture since then. Many of our listeners said, well, you know, they're in a plastic rubber pool. It's insulated; they're not connected to ground, they're not grounded.
0:15:14 - Leo Laporte
Yes, that's right. And the strip probably has a fuse.
0:15:18 - Steve Gibson
Well, one listener drew the analogy to the bird that lands on the high-tension wire. It lands on the wire, and it doesn't turn into barbecued bird immediately. You know, it doesn't know what's going on; maybe its teeth hurt a little bit, I'm not sure, but basically it's going to survive. The problem would be, assuming that this weird sandal power strip thing capsizes in the pool, it would just blow the fuse. I mean, back at the house, a fuse would blow.
0:15:54 - Leo Laporte
The coffee pot would turn off in the kitchen. But you know, what could be a trouble is if, at the same time as it sank, somebody stepped out of the pool and had one leg in the pool. Oh, and one leg on the ground.
0:16:08 - Steve Gibson
Then you're the bird that lands, straddles two wires, and we remember what happened with Wile E Coyote on the Roadrunner. It turns into a little poof, just a little shadow of its former self, little crispy sticks that then drops down to the bottom of the ground.
0:16:26 - Leo Laporte
So I want to see more Darwin Award pictures. Burke, who has probably been electrocuted many times in the studio, says the wire is the ground, so maybe the current could just go right back to ground through the wire, right? Well?
0:16:43 - Steve Gibson
now this guy who's further away, who seems to think this is quite entertaining. He's smiling. It looks like he's pushing the edge of the pool down.
0:16:53 - Leo Laporte
Also a mistake.
0:16:54 - Steve Gibson
Like, yeah, and so if any water is running out past him, that's running to ground, and he could be helping to close the circuit. Yeah, okay. People have survived being struck by lightning, apparently.
0:17:11 - Leo Laporte
Again, their teeth hurt. So do not throw a toaster in the pool, I'm just saying. No, or a grill. Don't do that. Actually, don't do any of this, kids.
0:17:21 - Steve Gibson
Bad idea. Thus the Darwin Award, although it looks like these people probably already have reproduced, so this won't be limiting their future ability to propagate this foolishness.
So, as we know, the United States has shunned the Russian cybersecurity firm Kaspersky over understandable, if unfair, concerns of the potential for Russian influence, which would be truly devastating if Kaspersky were to ever turn malicious, given how much of their software was phoning home from inside the US. Again, it's unfortunate that this happened. Kaspersky is nevertheless continuing to contribute their important security research to the world, and their publication last Friday about their discovery of a new Trojan, which they've dubbed SparkCat (S-P-A-R-K-C-A-T), is another example of Kaspersky's continuing value to everyone, even though we don't want to trust them anymore, which, as I said, is too bad. So I want to share the details of their discovery, because it should put everyone on notice of the way malware is evolving, and I'm sure that the fact that it illustrates the potential for the abuse of Microsoft's own screen-scraping Recall technology will not be lost on any of our listeners who dislike the idea of having their PCs constantly being scraped and archived, because that's sort of the way this thing works. Kaspersky's piece is titled "SparkCat Trojan stealer infiltrates App Store and Google Play, steals data from photos." They follow that with the tag: "We've discovered apps in the official Apple and Google stores that steal cryptocurrency wallet data by analyzing photos."
So here's what they published last Friday. And, you know, it strikes me this is sort of the evolution of the earlier cryptocurrency stealers, which were monitoring people's clipboards. Right, they were polling the Windows clipboard, because it would be natural for a Windows user who is wanting to send some cryptocurrency somewhere. You know, you're paying somebody through Bitcoin and they say, here's our address, send us X Bitcoin. And the addresses are crazy. So you copy that address with your mouse, you know, hit Ctrl+C to copy it. Then you go over to your Bitcoin wallet, where you want to send Bitcoin to an address, and you paste it. At no point do you try to study that gibberish of a Bitcoin address. You just blindly copy and paste. Of course, as we know, those first-generation cryptocurrency stealers would watch for the arrival of a Bitcoin address and quickly substitute their own, so that the address you pasted was not the address you had copied, and you ended up sending them the money that you were intending to send somewhere else. And then, you know, after a while you contact the original group and say, hey, where's my thing that I ordered? And they go, where's the money that you were supposed to send? And you say, I sent you the money. It was like, well, no. Anyway, bad guys got it. So here's the evolution of that.
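To make those mechanics concrete, here's a minimal, hypothetical sketch (not code from any real sample) of how that first-generation clipboard swap works, and why the simple habit of comparing what you copied against what you pasted defeats it:

```python
import re

# Rough shape of a legacy or bech32 Bitcoin address.
BTC_ADDRESS = re.compile(r'^(1|3|bc1)[A-Za-z0-9]{25,60}$')

class Clipboard:
    """Stand-in for the OS clipboard."""
    def __init__(self):
        self.content = ""
    def copy(self, text):
        self.content = text
    def paste(self):
        return self.content

def malicious_monitor(clipboard, attacker_address):
    """What the stealer does: if the clipboard holds something that looks
    like a Bitcoin address, silently replace it with the attacker's."""
    if BTC_ADDRESS.match(clipboard.content):
        clipboard.content = attacker_address

def looks_tampered(original, pasted):
    """The defense: even eyeballing the first and last few characters of
    the pasted address against the one you copied catches the swap."""
    return original != pasted

clip = Clipboard()
victim_addr = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"    # example address
attacker_addr = "1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"  # example address

clip.copy(victim_addr)
malicious_monitor(clip, attacker_addr)  # runs invisibly in the background
pasted = clip.paste()

print(looks_tampered(victim_addr, pasted))  # True: the address was swapped
```

The takeaway is the one Steve describes: the pasted address is no longer the one you copied, so verifying even a few characters at each end before sending defeats this entire class of stealer.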
Kaspersky said your smartphone's gallery may contain photos and screenshots of important information you keep there for safety or convenience, such as documents, bank agreements or seed phrases for recovering cryptocurrency wallets. All of this data can be stolen by a malicious app such as the SparkCat stealer we've discovered. This malware is currently configured to steal crypto wallet data, but it could be easily repurposed to steal any other valuable information. And again, it's one thing to be careful about not taking a photo of something sensitive with your phone, despite the fact that we're told how secure that is, but it is really creepy if, you know, your Windows desktop was doing that continuously. They said the worst part is that this malware has made its way into official app stores, with almost 250,000 downloads of infected apps from Google Play. Although malicious apps have been found in Google Play before, this marks the first time a stealer Trojan has been detected in the App Store. How does this threat work, and what can you do to protect yourself?
Apps containing SparkCat's malicious components fall into two categories. Some, such as numerous similar messenger apps claiming AI functionality, all from the same developer, were clearly designed as bait. Others are legitimate apps: food delivery services, news readers, crypto wallet utilities. They said we don't yet know how the Trojan functionality got into these apps. It may have been the result of a supply chain attack, where they broke into the apps' developers and injected their library, or where a third-party component used in the app was infected. Alternatively, the developers may have deliberately embedded the Trojan into their apps.
The stealer analyzes photos in the smartphone's gallery, and to that end, all infected apps request permission to access the user's photos. In many cases, this request seems completely legitimate. For example, the food delivery app ComeCome requested access for a customer support chat right upon opening that chat, which looked completely natural. Other applications request gallery access when launching their core functionality, which still seems harmless. After all, you do want to be able to share photos in a messenger, right? However, as soon as the user grants access to specific photos or the entire gallery, the malware starts going through all the photos it can reach, searching for anything it might find valuable. To find crypto wallet data among photos of cats and sunsets...
...the Trojan has a built-in optical character recognition module based on the Google ML Kit, a universal machine learning library. Depending upon the device's language settings, SparkCat downloads models trained to detect the relevant script in photos, whether Latin, Korean, Chinese or Japanese; so it's multilingual. After recognizing the text in an image, the Trojan checks it against a set of rules loaded from its command and control server. Thus what it does when it finds things can be varied on the fly. They said, in addition to keywords from the list (for example, "mnemonic"), the filter can be triggered by specific patterns, such as meaningless letter combinations in backup codes or certain word sequences in seed phrases. The Trojan uploads all photos containing potentially valuable text to the attackers' servers, along with detailed information about the recognized text and the device the image was stolen from. So it's serving as sort of a front-end filter. It doesn't want to swamp these nefarious developers with everything, you know, every photo in the phone of everyone who downloads it. So it does upfront filtering to determine if anything is interesting: no sunsets and cat pictures.
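As a rough illustration only (the keyword list and pattern here are my own assumptions about what such C2-pushed rules might look like, not SparkCat's actual rules), that kind of server-driven triage over OCR'd text can be sketched in a few lines:

```python
import re

# Hypothetical keyword rules of the sort a C2 server could push down.
KEYWORDS = {"mnemonic", "seed", "recovery", "wallet", "backup"}

# The rough shape of a 12- to 24-word seed phrase: a run of short
# lowercase words, as in the BIP-39 wordlist.
SEED_PHRASE = re.compile(r'\b(?:[a-z]{3,8}\s+){11,23}[a-z]{3,8}\b')

def is_interesting(ocr_text: str) -> bool:
    """Front-end filter: flag only text worth exfiltrating, so the
    operators are not swamped with sunsets and cat pictures."""
    lowered = ocr_text.lower()
    if any(word in lowered for word in KEYWORDS):
        return True
    return SEED_PHRASE.search(lowered) is not None

print(is_interesting("Beach sunset, July 2023"))                # False
print(is_interesting("Wallet recovery mnemonic, keep safe!"))   # True
print(is_interesting(
    "abandon ability able about above absent absorb "
    "abstract absurd abuse access accident"))                   # True
```

The last case shows why such a filter can work without any keyword at all: a screenshot of a seed phrase has a very distinctive textual shape, which is exactly the "certain word sequences in seed phrases" trigger Kaspersky describes.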
They said we identified 10 malicious apps in Google Play and 11 in the App Store. After notifying the relevant companies and before publishing this, all malicious apps had been removed from the stores. The total number of downloads from Google Play alone exceeded 242,000 at the time of analysis, and our telemetry data suggests that the same malware was available from other sites and unofficial app stores as well. Judging by SparkCat's dictionaries, it is trained to steal data from users in many European and Asian countries, and evidence indicates that attacks have been ongoing since at least March of 2024. So this is coming up on a year old. The authors of this malware are likely fluent in Chinese. More details on this, as well as the technical aspects of SparkCat, can be found in the full report on Securelist, which is where Kaspersky posts all of their technical material. So, under "How to protect yourself from OCR Trojans," they write:
Unfortunately, the age-old advice of "only download highly rated apps from official app stores" is a silver bullet no longer. Even Apple's App Store has now been infiltrated by a true infostealer, and similar incidents have occurred repeatedly in Google Play. Therefore, we need to strengthen the criteria here: only download highly rated apps with thousands, or better still, millions of downloads, published at least several months ago. Also verify app links in official sources, such as the developer's website, to ensure they're not fake. And read the reviews, especially the negative ones. They said you should also be extremely cautious about granting permissions to new apps. Previously, this was primarily a concern for accessibility settings, but now we see that even granting gallery access can lead to the theft of personal data. If you're not completely sure about an app's legitimacy (for example, it's not an official messenger, but a modified, "enhanced" version), don't grant it full access to all your photos and videos. Grant access only to specific photos and only when necessary. Storing documents, passwords, banking data or photos of seed phrases in your smartphone's gallery is highly unsafe.
0:28:40 - Leo Laporte
Don't do that.
0:28:41 - Steve Gibson
Just start with not doing that yes.
Get a password manager? Yep. They said, besides stealers such as SparkCat, there's also always the risk that someone peeks at the photos, or you accidentally upload them to a messenger or file-sharing service. Such information should be stored in a dedicated application. To your point, Leo, exactly that. And they said, finally, if you've already installed an infected application (the list of them is available at the end of the Securelist post), delete it and don't use it until the developer releases a fixed version. Meanwhile, carefully review your photo gallery to assess what data the cybercriminals may have obtained; change any passwords and block any cards saved in the gallery.
Although the version of SparkCat we discovered hunts for seed phrases specifically, it's possible that the Trojan could be reconfigured to steal other information. As for crypto wallet seed phrases, once created, they cannot be changed. Create a new crypto wallet, transfer all your funds there, and completely abandon the compromised one. We should say that Apple has presumably killed... well, I don't know if they've retroactively killed the...
0:30:02 - Leo Laporte
Yeah, they killed the apps in the store, but maybe they didn't use the kill switch. They have a kill switch to delete apps that are unsafe like that, and Kaspersky's point is that, even if they did, it's worth it.
0:30:16 - Steve Gibson
I mean, and the good news is, these are not mainstream apps. You know, ComeCome, which is some Chinese food delivery service, okay, it's not something that a lot of people are probably going to have. On the other hand, 242,000 people had downloaded apps that had this in them from Google Play. And so Kaspersky's point is it's worth auditing the photos that you have to see what they may have gotten, and get proactive, because if you've got a bunch of Bitcoin and you're storing your recovery phrases in a photo in your photo library, first of all, bad idea.
But secondly, you know, if you were to find that in your photo library, good idea to just create a new wallet, move everything over there, and just don't do that again.
0:31:17 - Leo Laporte
Now if somebody stole the password or the recovery phrase from my wallet. I would really appreciate it if you'd just let me know.
0:31:26 - Steve Gibson
Yes, you would call it a commission. Yes, exactly. Okay, so I linked to Kaspersky's full technical report for anyone who wants to dig into this more deeply. I'll go one step further than Kaspersky has in my advice. Just as is true with today's web browsers, whose users have demanded openness in the form of browser add-ons, the same openness has been demanded and received from mobile phone manufacturers. Unfortunately, there are bad guys in the world who profit from victimizing others.
The other thing we've seen is that, despite the best efforts of those managing the add-ons that are available for our browsers and our phones, malicious applications still manage to sneak in. The good news is that one thing we've seen over and over is that the least secure and malice-prone applications are typically what I guess I would call sort of gratuitous additions. They're apps that everyone can live without, so their victims tend to be people who download anything that comes along that looks even remotely interesting. I mean, they've completely lost control of their phones. They don't know where any of their apps are. They just scroll endlessly, trying to find an icon for something that they're looking for.
You know, they have no appreciation for the fact that there is a non-zero chance that the creator of any given app may have malicious intent or may not, but used a malicious library without knowing it.
The point is non-zero, which tells us, just by the law of statistics and probability and numbers, that the more apps you have, where each one has a non-zero chance of being a problem, the greater the total problem, and it only takes one in order to create a leak. So my advice is always to keep this in mind when deciding whether you really need the app you're considering. And, because our device manufacturers have done everything they can to give us the tools to restrain what apps can do, even after they're resident in our devices, be parsimonious with the access permissions that apps are granted. And I know this is tricky, since apps will be cleverly designed to need the permissions they wish to abuse, but at least always question their need. And, Leo, it just is a fact that we're seeing arguably more of this today than when this podcast began 20 years ago, because there's more money to be had.
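The "non-zero chance" argument above is just compounding probability, and a back-of-the-envelope calculation shows how quickly it adds up. The per-app probability here is an assumed, purely illustrative number, not a measured rate:

```python
# If each of n installed apps independently carries probability p of being
# (or becoming) malicious, the chance at least one is a problem is
# 1 - (1 - p)^n, which grows quickly with n.

def prob_at_least_one_bad(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.001  # assume a 0.1% chance per app (illustrative only)
for n in (10, 50, 200):
    print(f"{n:>3} apps: {prob_at_least_one_bad(p, n):.1%}")
# prints roughly 1.0%, 4.9% and 18.1%
```

Even at that tiny assumed per-app rate, a phone with a couple hundred apps has a meaningful chance of hosting at least one bad one, which is the whole case for installing fewer apps.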
Right, yeah. Everything's on our phones nowadays, and more devices are available. I mean, everybody has one. I see people, you know, everybody sitting in a restaurant is staring at their individual phones. I don't know how people don't fall off the curb.
0:34:45 - Leo Laporte
They're walking down the sidewalk staring at their phones.
0:34:48 - Steve Gibson
I think what is it? What are you doing?
0:34:51 - Leo Laporte
It's amazing. Well, yeah, they're desperately avoiding any boredom or feelings or knowledge of the world. It's narcotic; they're narcotizing themselves. I do think, and you said something really important, I really would underscore this, and I always said it on the radio show: install the fewest possible apps. Yes, on your desktop, on your laptop, on your iPad, on your phone. The fewer the apps, the better, because every app raises the specter of a security flaw or just a bug. Also, we are seeing the built-in apps slowly subsuming, right, the functionality.
0:35:39 - Steve Gibson
I used to have a really cool perspective-correction app. Now it's built in. There was never an edit button in the beginning for photos. Now you push edit, you can rotate them, you can fix the perspective, do all this kind of stuff, so you don't need third-party apps to do that.
0:35:59 - Leo Laporte
You probably on an iPhone or a good Android phone like a Google Pixel, could get away without any apps. I think that's really the case and that would be a lot safer, for sure.
0:36:11 - Steve Gibson
I should mention, one of your guests, I think it might have been Alex, mentioned Foreca, F-O-R-E-C-A.
0:36:20 - Leo Laporte
Yeah, we've been talking about that for a long time. It's the best weather app, absolutely.
0:36:27 - Steve Gibson
I've just been wanting to mention it to our listeners. Is it multi-platform?
0:36:30 - Leo Laporte
Is it available on Android? It's everywhere, yeah.
0:36:33 - Steve Gibson
It is so good. They asked me after a year, I think. They said, can we have a little more money? I said yes, because, you know, I want them to keep it the way it is. Anyway, it's short for forecast: F-O-R-E-C-A. It made me think of it because, you know, I don't think Apple's weather thing...
0:36:58 - Leo Laporte
Holds a candle to Foreca.
0:36:59 - Steve Gibson
It's pretty bare. So there's an example of something where you really... okay, yeah, but you can trust these guys, yeah.
0:37:05 - Leo Laporte
I think you can trust them. A lot of weather apps actually use Foreca as the back end. I think my Carrot Weather is using, or at least you can choose, the Foreca back end, and you can look at satellite and radar. And it does take some getting used to.
0:37:20 - Steve Gibson
It is very information-dense, and it's also customizable: what things you do care about and what you don't. I don't care about wind that much, so I can get rid of wind and save some space on my screen, but I do want rainfall, and boy, to know what time of day something's going to happen. Anyway, just an unsolicited note. I've been meaning to mention it because I just keep really liking it.
0:37:47 - Leo Laporte
They're on the google play store, the uh apple store. The web looks like they hit. Yeah, you can use the web version big screen, yeah, nice, and it has the uh the uh videos and everything too. Yeah, it's really pretty cool. I like it. Look at that you.
0:38:03 - Steve Gibson
If all you need is a green screen, Steve, you can do your own weather report. Yes. Okay, I was going to say something off-color, but no, no, no, I'm not a weather girl. That's good. But on that note, let's take a break, and then we're going to come back and talk about the UK's new demand for Apple's encrypted data. Oh, I do really want to hear what you have to say about that.
0:38:28 - Leo Laporte
We knew it was coming. It's just finally happened, right?
0:38:31 - Steve Gibson
Yes. Well, you know, it's like they keep trying, right? They keep hitting this immovable wall. I have an idea, though.
0:38:41 - Leo Laporte
Oh, now I'm liking it. Stay tuned. Steve has an idea. But first a word from our fine sponsor, thinks Canary. We've been talking about them for years. They are really a great product.
In a nutshell, Thinkst Canary is a honeypot that is ultimately configurable. You can deploy it in minutes, no coding on your part, and it can be anything from a Windows server to a web server, a SCADA device, an SSH server. Mine's a Synology NAS, and it impersonates these. I mean, it's really nicely done. My Synology NAS has a MAC address that's a Synology MAC address. It's got the login screen, just like DSM 7. I mean, it really looks real. In fact, the hacker is going to look at it and go, oh, that's not a setup, that's the real deal, I'm going to get in there. That's what's so great about the Thinkst Canary.
If someone is accessing your fake internal SSH server, you'll know. And, by the way, the Canary can make files that you can spread around as well: PDF, Excel, DOCX. Give them provocative names, employee-information.xls or something. If a bad guy or a malicious insider accesses those files or tries to brute-force your fake SSH server, your Thinkst Canary is immediately going to tell you you've got a problem, because there's no way anybody's going to access that by accident. That's somebody who's trying to steal from you. No false alerts, just the alerts that matter. And, by the way, you get them any way you want. It supports syslog; of course they have an API, but also webhooks, text messages, email. You can have a phone call, I bet. I mean, it's really flexible, very smart. So you get the Thinkst Canary. It looks like an external USB drive, just kind of a little black thing. You connect it to the wall and to your Ethernet. Then you go to the console in your browser, log into your account, choose a profile for your device, then you register it with the hosted console. Now that console is going to monitor it and handle the notifications; it's going to get them out to syslog, if you want, wherever you need them, any way you want. Webhooks, they support. Then just sit back. If an attacker is inside your network, or there's a malicious insider, they will make themselves known, because they will inevitably access your Thinkst Canary. They don't look vulnerable, they look valuable. Bad guys can't resist the Thinkst Canary.
Visit canary.tools/twit. Some big banks may have hundreds of them. A small operation like ours, a handful. I'll give you an idea. This is what it costs us: 7,500 bucks a year.
Five Thinkst Canaries. Okay. You get your own hosted console, all the upgrades, all the support, all the maintenance, and they're very responsive, they're very helpful. And I'm going to save you even more, because if you use the code TWIT in the "how did you hear about us" box (they love us), they will give you 10% off, not just for the first year, but forever. As long as you have a subscription, 10% off. And if you're at all nervous, because I understand maybe this is the first time you're hearing about this, although we've talked about it an awful lot, you should not worry, because you can always return your Thinkst Canary. They have a two-month, 60-day money-back guarantee, every penny, a full refund. I should point out that in the eight years that we have been doing ads for Thinkst Canary, no one has ever claimed the refund. Literally zero. Because once you get it in there, you go, oh yeah, I don't know how I lived without it. It's a brilliant idea.
Visit canary.tools slash twit. Use TWIT in the "how did you hear about us" box and you get 10% off for life. canary.tools slash twit. Love this product. I think you should check it out. Honestly, there's no risk. Why not? You could be the first one to return it. You won't, though. We know you won't. It's such a good idea. canary.tools slash twit. All right, Steve, I'm very curious. They call it the Snoopers' Charter, you know.
0:42:53 - Steve Gibson
Yeah, the Investigatory...
0:42:54 - Leo Laporte
Powers Act.
0:42:56 - Steve Gibson
So last Friday the news broke that the United Kingdom was demanding that Apple provide access to its users' cloud data. I received links from our listeners to stories of this in the Register, the Guardian and the BBC. These reports were all picking up the news, which was first reported in the Washington Post, and the Post provided the best coverage of all. So let's turn to the source for the whole story. Here's what we know from the Washington Post's reporting. They said security officials in the United Kingdom have demanded that Apple create a backdoor allowing them to retrieve all the content any Apple user worldwide has uploaded to the cloud, people familiar with the matter told the Washington Post. Okay, so again: all the content any Apple user worldwide has uploaded to the cloud. Good luck with that, but this is what they say they want.
The British government's undisclosed order, writes the Post, issued last month, requires blanket capability to view fully encrypted material, not merely assistance in cracking a specific account, and has no known precedent in major democracies. Complying would mark a significant defeat for tech companies in their decades-long battle to avoid being wielded as government tools against their users, the people said, speaking under the condition of anonymity to discuss legally and politically sensitive issues. Rather than break the security promises it made to its users everywhere, Apple is likely to stop offering encrypted storage in the UK, the people said. Yet that concession would not fulfill the UK's demand for backdoor access to the service in other countries, including the US. The Office of the Home Secretary has served Apple with a document called a Technical Capability Notice, ordering it to provide access under the sweeping UK Investigatory Powers Act of 2016, which authorizes law enforcement to compel assistance from companies when needed to collect evidence.
The law, known by critics as, as you said, Leo, the Snoopers' Charter, makes it a criminal offense to reveal that the government has even made such a demand. An Apple spokesman declined to comment; after all, they can't reveal that. Apple can appeal the UK capability notice to a secret technical panel, which would consider arguments about the expense of the requirement, and to a judge who would weigh whether the request was in proportion to the government's needs. But the law does not permit Apple to delay complying during an appeal, meaning you can't use the appeal to stall the order, which, of course, means that the information the UK wants would already be in their possession. Even if Apple were to win the appeal, it'd be too late. So this is a mess. In March, writes the Post, when the company was on notice that such a requirement might be coming, so almost a year ago, it told Parliament, quote, "There is no reason why the UK government should have the authority to decide for citizens of the world whether they can avail themselves of the proven security benefits that flow from end-to-end encryption," unquote.
The Home Office said Thursday that its policy was not to discuss any technical demands. Their spokesman said, quote, "We do not comment on operational matters, including, for example, confirming or denying the existence of any such notices." In other words, no comment. Senior national security officials in the Biden administration had been tracking the matter since the UK first told Apple it might demand access and Apple said it would refuse. It could not be determined whether they raised objections to Britain. Trump White House and intelligence officials also declined comment. One of the people briefed on the situation, a consultant advising the United States on encryption matters, said Apple would be barred from warning its users that its most advanced encryption no longer provided full security. The person deemed it shocking that the UK government was demanding Apple's help to spy on non-British users without their governments' knowledge. This is really important. It includes us, yes. And a former White House security advisor confirmed the existence of the British order. So in their reporting the Washington Post, you know, did their due diligence, and they got multi-source confirmation that this is all happening and has happened. At issue, they finish, is cloud storage that only the user, not Apple, can unlock.
Apple started rolling out the option, which it calls Advanced Data Protection, in 2022, after drawing fire from the FBI during the first term of President Donald Trump, who pilloried the company for not aiding in the arrest of, quote, "killers, drug dealers and other violent criminal elements," unquote. The service, Advanced Data Protection, is an available security option for Apple users in the United States and elsewhere, though most iPhone and Mac users do not go through the steps to enable it, because it's not enabled by default. The service offers enhanced protection from hacking and shuts down a routine method law enforcement uses to access photos, messages and other material. iCloud storage and backups are favored targets for US search warrants, which can be served on Apple without the user knowing.
And just for the record, remember, it's often not a question or choice about whether you want ADP enabled. I'd love to have it enabled, but I cannot. In fact, the more faithful and loyal a user is to Apple, the less likely it is they'll be able to enable Advanced Data Protection. I just double-checked as I was preparing the notes. On Sunday I tried to enable it, and I was provided with a list of six older, but still in use by me, Apple devices that would need to be running a newer edition of iOS or iPadOS than they're capable of running. So ADP is a non-starter for me, since I still use those older and still-working Apple devices every day.
Anyway, the Post continues, saying technologists, some intelligence officers and political supporters of encryption reacted strongly to the revelation. After this story first appeared, Senator Ron Wyden, a Democrat on the Senate Intelligence Committee, said it was important for the United States to dissuade Britain. He said, quote, "Trump and American tech companies letting foreign governments secretly spy on Americans would be unconscionable and an unmitigated disaster for Americans' privacy and our national security." Meredith Whittaker, who, of course, we know is the president of the nonprofit behind the encrypted messenger Signal, said, quote, "Using technical capability notices to weaken encryption around the globe is a shocking move that will position the UK as a tech pariah rather than a tech leader. If implemented, the directive will create a dangerous cybersecurity vulnerability in the nervous system of our global economy." Now, she didn't say they would pull out, but we know they would; she has previously said as much when the EU was rattling its sabers.
Similarly, writes the Post, law enforcement authorities around the world have complained about increased use of encryption in communication modes beyond simple phone traffic, which, in the United States, can be monitored with a court's permission and, as we know, can also be monitored without the court's permission by China. The UK and FBI, in particular, have said that encryption lets terrorists and child abusers hide more easily. Tech companies have pushed back, stressing a right to privacy in personal communication and arguing that backdoors for law enforcement are often exploited by criminals and can be abused by authoritarian regimes. Most electronic communication is encrypted to some degree as it passes through privately owned systems before reaching its destination. Usually, such intermediaries as email providers and internet access companies can obtain the plain text if police ask, but an increasing number of tech offerings are encrypted end-to-end, meaning that no intermediary has access to the digital keys that would unlock the content. That includes Signal; WhatsApp, which, as we know, is based on the Signal protocol, and Messenger, both from Meta; and Apple's iMessage and FaceTime calls. Often, such content loses its end-to-end protection when it's backed up for storage in the cloud. That does not happen when Apple's Advanced Data Protection option is enabled.
Apple has made privacy a selling point for its phones for years, a stance that was enhanced in 2016 when it successfully fought a US order to unlock the iPhone of a dead terrorist in San Bernardino, California. Apple had also planned to scan user devices for illegal material; I'll mention that again in a second. That initiative was shelved after heated criticism by privacy advocates and security experts, who said it would turn the technology against customers in unpredictable ways. Google would be a bigger target for UK officials, because it's made the backups for Android phones encrypted by default since 2018. Google spokesman Ed Fernandez declined to say whether any government had sought a backdoor, but implied none had been implemented. He said, quote, "Google cannot access Android end-to-end encrypted backup data, even with a legal order." Meta also offers encrypted backups for WhatsApp; a spokesperson declined to comment on government requests, but pointed to a transparency report. The UK order demands backdoor access, potentially prompting Apple to withdraw the service rather than comply, and of course, that's what everyone thinks they'll do.
The battle over storage privacy escalating in Britain is not entirely unexpected. In 2022, UK officials condemned Apple's plans to introduce strong encryption for storage. A government spokesperson told the Guardian newspaper, referring specifically to child safety laws, quote, "End-to-end encryption cannot be allowed to hamper efforts to catch perpetrators of the most serious crimes," unquote. After the Home Office gave Apple a draft of what would become the backdoor order, the company hinted to lawmakers and the public what might lie ahead. During a debate in Parliament over amendments to the Investigatory Powers Act, Apple warned last March that the law allowed the government to demand backdoors that could apply around the world. In a written submission, Apple stated these provisions could be used to force a company like Apple, that would never build a backdoor into its products, to publicly withdraw critical security features from the UK market, depriving UK users of these protections. Apple argued that wielding the act against strong encryption would conflict with a ruling by the European Court of Human Rights, which held that any law requiring companies to produce end-to-end encrypted communications, quote, "risks amounting to a requirement that providers of such services weaken the encryption mechanism for all users," and violates the European right to privacy.
Finally, in the United States, decades of complaints from law enforcement about encryption have recently been sidelined by massive hacks by suspected Chinese government agents, who breached the biggest communications companies and listened in on calls at will. In a joint December press briefing on the case, FBI leaders and a Department of Homeland Security official urged Americans not to rely on standard phone service for privacy and to use encrypted services when possible. We mentioned that at the time. Also that month, the FBI, the NSA and CISA joined in recommending dozens of steps to counter the Chinese hacking spree, including, quote, "Ensure the traffic is end-to-end encrypted to the maximum extent possible," unquote. Officials in Canada, New Zealand and Australia endorsed the recommendations. Those in the United Kingdom did not. Okay.
So the Washington Post's report correctly noted, as we also analyzed after its architecture was published, that Apple has properly implemented true end-to-end encryption for every one of its cloud-based services where its use is feasible. As such, only the user's various iOS and iPadOS devices contain the key that's required to decrypt the contents of the data stored and shared in the cloud. Everything transiting to and from the cloud is, as we used to say, PIE, P-I-E, pre-internet encrypted, and cannot possibly be accessed by anyone with access to either the stored data or the data in transit. The data can only be encrypted or decrypted on the user's device, and the key can never be removed from the user's device. So we're back here once again with the UK demanding something that none of the providers of secure messaging or secure storage will be willing to accommodate.
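To make the "pre-internet encryption" idea concrete, here's a minimal toy sketch. This is only an illustration of the principle, not Apple's actual ADP design, which uses real ciphers and a hardware-protected key hierarchy; every name and the XOR-keystream construction here are invented for the demo:

```python
import secrets
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the device key. This toy
    # construction stands in for the real cipher a production system uses.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_on_device(device_key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(device_key, len(data))))

# The key is generated on, and never leaves, the user's device.
device_key = secrets.token_bytes(32)
photo = b"family photo bytes"

# Only ciphertext ever transits to, or rests in, the cloud.
uploaded = encrypt_on_device(device_key, photo)
assert uploaded != photo

# Anyone holding only `uploaded` (the cloud provider, or anyone who
# compels the provider) learns nothing; only the key holder can decrypt.
assert encrypt_on_device(device_key, uploaded) == photo
```

The point the sketch makes is exactly Steve's: since encryption happens before the data touches the internet, a backdoor order served on the cloud provider yields only ciphertext.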
But there's been a recent change that promises to provide the long-sought-after solution to at least part of this problem, at least to one of the reasons the bureaucrats and politicians say they need this, namely "for the children," and that's AI. Back in 1964, as part of a ruling about pornography, US Supreme Court Justice Potter Stewart famously said, "I may not be able to define it, but I know it when I see it." I see no reason why AI, functioning as an autonomous angel perched on every iOS user's shoulder, should not be able to stand in for Justice Stewart. This AI would not need to contain the library of hashes of known CSAM, child sexual abuse material, which users refused to have preloaded into their devices, feeling that this awful stuff was somehow in their phone. Instead, an AI would be trained to recognize such images.
We know that Apple devices are already actively performing some of this nanny function. They are already empowered to warn their underage users when they may be about to send, or receive and view, any imagery that might be age-inappropriate for them, and this is all that any far more capable AI-enabled monitoring system would need to do. What's significant is that it would not need to prevent the device from capturing and containing whatever content its user may wish to have. Parents can still take photos of their own kids in the bath. The system simply needs to filter out and prohibit the device's communication, its reception or transmission, of any such content that could potentially be subject to abuse. And once such filters are in place, there will be no need to gain access to anything stored in the cloud, because there will be no way for anything abusive to leave or be received by any Apple device.
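The gate Steve is describing, allow capture and local storage, but refuse transmission of flagged content, might look something like this toy sketch. The "model" here is just a stubbed score table and the threshold is an invented policy value; a real system would run on-device neural network inference:

```python
# Stubbed classifier scores keyed by content name. A real implementation
# would run a trained on-device image model; these values are invented.
TOY_SCORES = {"vacation.jpg": 0.02, "flagged.jpg": 0.97}

BLOCK_THRESHOLD = 0.9  # hypothetical policy threshold

def classifier_score(name: str) -> float:
    return TOY_SCORES[name]

def may_transmit(name: str) -> bool:
    # The gate sits only on the communication path: the device may still
    # capture and store anything locally, but refuses to send or receive
    # content the model flags above the threshold.
    return classifier_score(name) < BLOCK_THRESHOLD

assert may_transmit("vacation.jpg")       # ordinary content flows freely
assert not may_transmit("flagged.jpg")    # flagged content never leaves
```

And, as Steve notes, a gate like this is trivially auditable: feed it test content and observe whether transmission is refused, with nothing in the cloud ever needing to be inspected.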
Given the history of government abuse of surveillance powers, many argue that the urgency to protect the children is just a smokescreen, behind which lies a thirst for wider surveillance that could be turned, as it has been elsewhere, onto political rivals and other non-juveniles. So having companies like Apple, Signal, Meta and others deploy local AI to lock down the content which their systems would refuse to send or receive short-circuits any government attempt at overreach. And one of the best things about such solutions is that their effectiveness is so readily tested: just present an AI-protected device with some test content that should not be communicated in order to verify that it's doing its job. So I really could see this. You know, the world is all abuzz about AI. We're understanding how capable it is. It seems easily possible that a competent local AI image recognition system could perform filtering functions on individual users' devices.
1:02:53 - Leo Laporte
Well, yes, I guess. I mean, I think it wasn't merely that people didn't want the hashes on there. I think they just don't like the idea of that kind of scanning going on, of any involvement of any kind. Yeah, but maybe, you know, if it's a trade for encryption... I think it's obvious they want everything. This CSAM is just the pretext. They want everything, yeah.
1:03:16 - Steve Gibson
As I mentioned at the top, France is doing something similar. An article appearing in Intelligence Online carried the headline "France makes new push for backdoors into encrypted messaging apps." Senators have passed an amendment paving the way for intelligence agencies to access backdoors into messaging apps such as WhatsApp, Signal and Telegram, and presumably, you know, iMessage. But as far as we know, there are no such backdoors, so that would mean requiring companies to create them. It's going to be interesting, Leo, to see what happens. You know, what is Apple going to say to the UK? Well, we're going to not offer any encryption in the UK? But, as we know, that's not what the UK is demanding. They're demanding access to any user anywhere. Like, you know, demanding the end of encryption, right?
1:04:31 - Leo Laporte
Basically, yeah. The thing that we don't know, and probably will never know, because this is secret, I mean, the UK has not admitted to it; as you said, it was just a leaker. I would imagine they've also sent similar requests to Signal, WhatsApp...
1:04:50 - Steve Gibson
Google, right? It just hasn't leaked. Right, because why would they target Apple? Apple's not even the majority platform. Google and Android are the larger platform, right?
1:05:00 - Leo Laporte
Why stop at Apple? So, I mean, there will be no refuge except for doing a roll-your-own kind of a thing, if you really wanted end-to-end encryption.
1:05:09 - Steve Gibson
And, as we've said, if encryption is outlawed, only the outlaws will be using encryption.
1:05:14 - Leo Laporte
Yeah, people with incentive, because, frankly, very few people use Advanced Data Protection, you found. One of the things that stopped me is, you know, you have to have everything up to date, but also you lose some capabilities, and there's this whole big risk of losing all of your data, too, if you forget your password. Most people don't use it. So, I mean, the people who are most motivated to use encryption, who are criminals, of course... well, no, not all, but criminals are among those... are going to find ways. So this isn't going to have any effect. It's 1984, is what it is. It's a bit depressing.
Okay, another break and then we're going to talk about Firefox 135.
1:05:57 - Steve Gibson
Okay, and a bunch of new features.
1:05:59 - Leo Laporte
Yes, sir, I'll start downloading it right now, but before I do, let me tell you about our sponsor for this segment of Security Now Zscaler, the leader in cloud security. You've heard us talk about zero trust. This is the way right. This is the way forward. Enterprises have spent billions of dollars on firewalls, perimeter defenses, right, and VPNs so that you can get through the firewall and get to work. But breaches, they're not going away. They rise every year. 18% increase in ransomware attacks in 2024. $75 million record payout in 2024. And that's probably only the tip of the iceberg, as we're going to learn later.
A lot of companies and others are reluctant to say, oh yeah, we paid the bad guys. They don't want to say it, but we know this is going on, and it's just getting worse and worse. The problem is that these perimeter defenses actually expand your attack surface: in order to make it possible to VPN in, they have to have public-facing IP addresses. Bad actors now have something to hang their hat on, and they can get in even more easily than ever before, because they're using AI to crack these, frankly, flawed security perimeters. Oh, and there's another problem, because VPNs and firewalls enable lateral movement. The minute you're in the network, the assumption is, oh, he's an employee, he's a good guy, so they'll connect you to the entire network. So now there's lateral movement. A bad guy, once they're in, can move anywhere and start exfiltrating stuff like your corporate emails and your customers' information, and they do it via encrypted traffic that the firewalls struggle to inspect at scale. I mean, it's just a mess. Bottom line is, hackers are exploiting traditional security infrastructures, and they're doing it faster than ever using AI, so they're outpacing your defenses. Now's the time to think differently, to rethink your security. Don't let the bad actors win. They're innovating and exploiting faster than you can defend.
You need Zscaler Zero Trust plus AI. It stops attackers by, well, one, hiding your attack surface: your apps and IPs are invisible, and hackers can't attack what they can't see. It also eliminates lateral movement.
It connects users to apps, not networks. Once you're in, you don't get carte blanche; you can only access specific apps, not the entire network, and based on the permissions that you've been given. And, by the way, Zscaler plus AI continuously verifies every request based on identity and context. AI is really great for this. It also simplifies security management because of the AI-powered automation. And they have to use AI, because one of the things Zscaler does is analyze half a trillion daily transactions, looking for malicious intent and malicious action, and that's a needle in a very big haystack. But the AI can really help find that. Hackers can't attack what they can't see. Protect your organization with Zscaler Zero Trust and AI. Learn more at zscaler.com/security. Z-S-C-A-L-E-R dot com slash security. Thank you, Zscaler, for supporting Steve and Security Now, and you support us when you use that address, so they know you saw it here. zscaler.com/security.
1:09:34 - Steve Gibson
Steve. So Firefox 135 was released one week ago last Tuesday, and there's some interesting news about some new features. Despite having launched Firefox four days after last Tuesday, my Firefox was still on the previous 134 release. So I went to About Firefox, and that's how I saw that I was on 134, and it said, you know, update or upgrade or whatever. I clicked the button, it did that and restarted, and I was first greeted with a big page telling me that I'm now able to edit PDFs directly in Firefox, which may indeed come in handy. But beyond that, Firefox Translations now supports more languages than ever. Pages in Simplified Chinese, Japanese and Korean can now be translated, and Russian is now available as a target language. There was some Russian site that I went to when I was pursuing news for the podcast, and it came up unintelligible, but there was that little translation icon at the right-hand end of the URL.
I clicked it and, blink, it turned it into English. Actually, I think some of the text that's in here is from the translation. So, very handy. Also, credit card autofill is now being rolled out gradually to all users globally, and, as I mentioned, AI chatbot access is now also being gradually rolled out. I already had it when I updated. You choose the AI chatbot from the sidebar list of available sidebars, or you can go to Firefox Labs under the Settings page in order to find it. Then you choose which provider you want, and so forth. I'll talk about that in more detail in a second.
Firefox also enforces certificate transparency, meaning that web servers must provide sufficient proof that their certificates were publicly disclosed before they will be trusted. This only applies to servers using certificates issued by a certificate authority in Mozilla's root CA program, but that's all mainline certificate authorities. So that's just tightening up Firefox's public key certificate management. Also good news: CRLite, which we've talked about, that's the Bloom-filter-based certificate revocation system, is also now being gradually rolled out. Mozilla, several times a day, updates a master Bloom filter which our browsers will download, and then we will be doing browser-side revocation checking with very little delay and no privacy concerns. Our browsers will not be reaching out to anybody asking whether the certificates that they're receiving from web servers are still valid.
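The core Bloom filter trick that CRLite relies on can be sketched in a few lines. This is just the membership-test idea, not Mozilla's actual cascading filter construction, and the sizes and hash counts here are arbitrary:

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1 << 16, hashes: int = 4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive several independent bit positions per item.
        for i in range(self.hashes):
            digest = hashlib.sha256(i.to_bytes(1, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        # Can report false positives, never false negatives, which is why
        # CRLite layers multiple filters to cancel the false positives out.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

revoked = BloomFilter()
revoked.add(b"serial:deadbeef")

assert b"serial:deadbeef" in revoked      # a revoked cert is always caught
assert b"serial:cafef00d" not in revoked  # unrevoked certs almost always pass
```

Because the whole filter is tiny compared to a list of every revoked certificate, the browser can download it periodically and answer revocation queries entirely locally, which is exactly the privacy win Steve describes.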
Firefox now includes, they wrote, safeguards to prevent sites from abusing the history API by generating excessive history entries, which makes navigating with the back and forward buttons difficult by deliberately cluttering up the history. I'm sure we've all run across this; it bugs me when it happens. You know, you go to a page and it refers you to another page, but then the back arrow doesn't allow you to get back to where you came from. Sometimes you're able to hit back very quickly several times in order to get around that, but not always. So they've built in a safeguard so that history entries can no longer be inserted through the JavaScript API without the user actually taking actions that create a breadcrumb history. They also said that the Do Not Track checkbox has been removed from Preferences. That's only because it's been incorporated into Global Privacy Control; GPC is where that much stronger protection has been incorporated. And the "Copy Without Site Tracking" menu item was renamed.
"Copy Clean Link." Basically, if you're copying a link that has tracking crap in it, Firefox will remove that debris in order to give you a clean link. So it's now called "Copy Clean Link" rather than "Copy Without Site Tracking." And those are the most interesting half of the changes that are now in 135.
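The link-cleaning idea is easy to picture. Here's a rough equivalent in Python; note the parameter denylist is a hypothetical sample for illustration, since Firefox maintains its own list of known tracking parameters, which isn't reproduced here:

```python
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

# Hypothetical sample denylist; real browsers ship a curated list.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def copy_clean_link(url: str) -> str:
    # Re-assemble the URL, keeping only query parameters that are not
    # on the tracking denylist.
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(copy_clean_link("https://example.com/a?id=42&utm_source=x&fbclid=abc"))
# → https://example.com/a?id=42
```

Functional parameters like `id=42` survive, while the tracking debris is stripped, which is all "Copy Clean Link" is doing.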
Being a user myself, as we know, of ChatGPT, the idea of having it even more handy in my Firefox sidebar, where normally I always have Tree Style Tabs open, that's intriguing. As I said, I already have access to it. It'll be interesting to see what percentage of our listeners do. They're saying it's being rolled out, but it already came to me.
Control-Alt-X is the shortcut which immediately jumps you to the AI chat in the sidebar, and at the moment Anthropic's Claude, ChatGPT, Google's Gemini, HuggingChat and Mistral's Le Chat are the various AIs that are supported.
You're able to choose among them and jump around them dynamically as well. You can also, if you go to the Settings page under the hamburger menu icon in the upper right, and then under Settings over on the left, go to Firefox Labs, where you're able to enable it and see if it's available on your browser, if you couldn't get to it in the sidebar. So, anyway, a bunch of cool things added to our favorite browser that, Leo, at least you and I use, and I know that a lot of our listeners do too. The United States National Security Agency, our NSA, in coordination with our four partner countries, which together form the Five Eyes alliance, has just released the latest guidance on securing network edge devices. I'm just going to share their joint announcement, which is relatively short, but I know that many of our listeners have frontline responsibility in their enterprises, with a great many necessarily exposed devices on the edge, meaning, you know, the network edge, typically the edge where the internet connects to the enterprise.
So NSA.gov's release of this was dated from Fort Meade, Maryland, and they said the National Security Agency has joined the Australian Signals Directorate's Australian Cyber Security Centre, the Canadian Centre for Cyber Security and others to release a set of CSIs, Cybersecurity Information Sheets, that highlight critically important mitigation strategies for securing edge devices, including firewalls, routers and virtual private network gateways. Collectively, these reports cover mitigation strategies for edge devices. The first sheet is the executive guidance, "Mitigation Strategies for Edge Devices: Executive Guidance." The second is the practitioner's guidance, and the other is "Security Considerations for Edge Devices." They said they provide a high-level summary. So I've got links in the show notes to the announcement from the NSA of this, and in the announcement are the links to each of those three reports. The executive guidance is a broad overview: know the edge, procure secure-by-design devices, apply hardening guidance and so forth, you know, sort of basic stuff. The practitioner's guidance is much more detailed and very useful.
But it occurred to me that for our listeners it's always useful to have a checklist, right? Just to go through and say, yep, took care of that; yep, considered that; yep, considered that. And that's nice for covering one's bases, but if anything does happen, you're able to say, well, you know, we're in full compliance with the NSA's latest guidance. And there's also a sheet you can give to your boss and say, look, boss, we need to buy some stuff here, because our stuff won't do what the NSA is telling us we need to do. So, useful info.
I mentioned Netgear at the top. Anyone having a recent Netgear Wi-Fi 6 access point or Netgear Nighthawk gaming router should be very sure that you're running the latest, recently released, as of last week, firmware. Make sure now. Three Netgear WAX-series access points, including the WAX220, until and unless updated, all contain highly critical CVSS 9.6 authentication bypass vulnerabilities, and we know what that means. If there's anything exposed to the internet, there's now a way for bad guys to get in. And, as I said, I already know that, as a follower of this podcast, you would never enable any internet-facing remote management capabilities.
1:20:16 - Leo Laporte
No.
1:20:17 - Steve Gibson
But it's also human to assume that it could never happen to you. So please make sure that you're running the latest firmware as of last week and, better yet, arrange to never be vulnerable in the first place by not opening any of those sorts of ports. So those three Wi-Fi access points were vulnerable to a now-patched authentication bypass with that 9.6 out of 10. But three other Netgear Nighthawk gaming routers rated an even higher CVSS score of 9.8 for their unauthenticated remote code execution vulnerabilities. The three affected routers are the XR500, the XR1000, and the XR1000V2. They are all Nighthawk Pro Gaming routers. If any of those numbers sound familiar, and especially if you or someone you know may have been unable to resist the temptation of enabling any sort of remote access, you'll want to update them to the latest firmware immediately. I saw no reports of this being a zero-day. As far as I know, the vulnerability was responsibly reported to Netgear. But we also know that once it is known that these problems exist, bad guys can reverse engineer the firmware in the unpatched routers, figure out how to get in, and then start attacking. So there's a window here. You want to make sure that you're not vulnerable within that time period. Okay, Sysinternals.
There was a surprising bit of news involving the much-beloved Sysinternals tools. As many of our listeners know, they were, and still are, a collection of truly unique and powerful utilities that were originally created by Mark Russinovich and Bryce Cogswell. Their little Texas-based company was purchased lock, stock and barrel by Microsoft back in 2006, much to many people's chagrin, since everyone was quite worried at the time that it might spell the end of that fabulous and really irreplaceable tool set. Fortunately, that didn't happen, and the tools remain available today from Microsoft and are still being maintained and upgraded, which makes this news of a recent discovery all the more curious and troubling. A software engineer by the name of Rake Schneider has reported that he has discovered DLL hijacking bugs in the Sysinternals tools. In fact, it's this guy's page; it's written in German, and it was Firefox's built-in translator that allowed me to turn it into English. The curious and troubling part comes after a 90-day responsible disclosure window. Rake's detailed public disclosure reads, and this is just the beginning of it, quote: I've identified and verified critical vulnerabilities in almost all Sysinternals tools and presented the background and attack in a video.
A summary of the weak spot and the link to the video can be found here in this blog post. These tools, developed by Microsoft, and originally by Sysinternals of course, are widely used in IT administration and are often used for analysis and troubleshooting. The vulnerability demonstrated in the video affects numerous applications of the suite and allows attackers to use DLL injection to inject and execute defective code. And now, okay, that "defective" may be an artifact of the translation; we know it could be not defective but malicious code. And he said, now that more than 90 days have passed since the initial disclosure to Microsoft, it's time to talk about it. And then he goes on to do so. I have a link to his posting in German in the show notes, and if you've got a translator built into your browser and don't speak German, then it'll do a good job of translating it into English for you.
1:25:11 - Leo Laporte
Actually, my translation and I'm not sure where it came from says malicious code.
1:25:15 - Steve Gibson
Ah, interesting.
1:25:17 - Leo Laporte
So this is Arc, so I don't know what translator it's using. Probably not Google.
1:25:21 - Steve Gibson
Okay. So the problem is a well-known and common problem with Windows DLLs. Among many problems, DLLs made sense back in the days of Windows 2 computers, because it was a way of sharing code, and the idea would be that, rather than various applications all needing to bring their own code along, not only because we had applications that were sharing 20 megabyte hard drives or floppies, but because there wasn't much RAM.
So you just wanted to be able to share these libraries. Great idea back then. Today it's pure legacy. It absolutely makes no sense whatsoever, but there's never been a point in time where Microsoft could break this, so we still have it today.
So what happens is the Windows executable file loader, when it's loading an executable file, looks inside the executable. The executable declares the DLLs that it's reliant upon, the system code DLLs that it needs, and so the executable file loader loads those for the executable so that they're there, linked up to it, and ready to go. It first looks in the application's own directory, that is, where the EXE is being run from, and this behavior was originally deliberate, since it allowed applications to bring along their own more recent, or maybe even older, versions of DLLs. This is where the whole DLL thing began to fall apart, because those would then be loaded and used preferentially over whatever same-named DLLs the system might or might not already have. The problem is that that convenience feature can be readily abused. In the case of the Sysinternals executables, they're not relying upon any of their own DLLs. This is actually one of the things that makes them so nice: they are single executables that just get their jobs done. Very clean. But like all Windows applications, they do rely heavily upon many system DLLs. Rather than insisting that the system DLLs they require be loaded from within the system's own protected directories, as they should, the Sysinternals apps use the default behavior, where Windows will first look inside the app's own directory, and this enables the exploit. Bad guys can place a DLL that's named the same as a system DLL in the Sysinternals tool's execution directory, and it will be loaded instead of the intended system DLL.
This flaw has been widely picked up and reported by the tech press over the past few days. The reporting notes that many of the Sysinternals utilities prioritize DLL loading from untrusted paths, such as the current working directory or network paths, before looking in secure system directories for their DLLs. One piece of this reporting wrote: the vulnerability was responsibly disclosed to Microsoft on October 28, 2024. However, Microsoft has classified it as a defense-in-depth issue rather than a critical flaw. This classification implies that mitigation relies on secure usage practices rather than addressing it as a fundamental security defect. While Microsoft emphasizes running executables from local program directories, researchers argue that network drives, where the current working directory becomes the application's execution path, pose significant risks, as indeed they do. So what's most significant here, to me, is the breadth of press coverage and reporting that this news has generated. I mean, this got picked up everywhere because Sysinternals is so popular.
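To make the search-order problem concrete, here's a small Python sketch. This is emphatically not the Windows loader itself, just a toy model of its default DLL lookup sequence; the directory names and DLL names are hypothetical, chosen to illustrate why a same-named DLL planted next to the EXE shadows the trusted copy in System32.

```python
# Toy model of Windows' default DLL search order (simplified).
# The key point: the application's own directory is searched BEFORE the
# protected system directory, so a planted same-named DLL wins.

DEFAULT_SEARCH_ORDER = [
    "app_dir",      # directory the EXE was launched from (searched first!)
    "system_dir",   # e.g. C:\Windows\System32, where the trusted copy lives
    "windows_dir",  # the Windows directory
    "current_dir",  # current working directory (e.g. a network share)
    "path_dirs",    # directories on PATH
]

def resolve_dll(dll_name, locations):
    """Return the first location containing dll_name, mimicking the
    default (unsafe) search order. 'locations' maps location name to a
    list of DLL filenames present there."""
    for loc in DEFAULT_SEARCH_ORDER:
        files = {f.lower() for f in locations.get(loc, [])}
        if dll_name.lower() in files:
            return loc
    return None

# A clean system: the DLL is only found in System32.
clean = {"system_dir": ["version.dll"]}
assert resolve_dll("version.dll", clean) == "system_dir"

# The hijack: an attacker drops a malicious same-named DLL next to the
# EXE (say, on a writable network share), and it shadows the system copy.
hijacked = {"app_dir": ["version.dll"], "system_dir": ["version.dll"]}
assert resolve_dll("version.dll", hijacked) == "app_dir"
```

The hardening researchers argue for is exactly the opposite behavior: load system libraries only from the system directory, which on real Windows an application can request via `LoadLibraryEx` with the `LOAD_LIBRARY_SEARCH_SYSTEM32` flag.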
1:30:08 - Leo Laporte
Everybody uses it yeah.
1:30:09 - Steve Gibson
Yes. So we've seen Microsoft respond when sufficient noise is made. We saw how quickly they backpedaled on the first release of their Copilot+ Recall screen scraper, so I would imagine that the amount of bad press being generated here will result in someone's attention being pointed at updating all of the vulnerable Sysinternals tools. I suspect Microsoft is regretting that they blew this off and said, oh, it's not our problem, you just have to be careful how you use them. It's like, okay, good luck with that. Unfortunately, there's no update mechanism for the bazillion copies of Sysinternals tools that have already been downloaded and are deployed.
1:31:03 - Leo Laporte
They will never be updated, interesting, unless they're manually replaced. They all have this behavior? Yikes. Yeah, this creates an enduring opportunity for exploitation. What is a defense-in-depth issue? What does that mean? Just that you should be careful? Yeah, yeah, exactly.
1:31:25 - Steve Gibson
Well, yes, but you know, these are advanced sleuthing tools, right? So you shouldn't leave them around on computers where they could be exploited. Yeah, thanks. But everybody does. Yeah.
1:31:40 - Leo Laporte
Uh, I imagine it's a custom DLL that the Sysinternals tools run. In fact, they probably have a common DLL.
1:31:48 - Steve Gibson
No, no, it's like kernel32.dll needs to get loaded.
1:31:52 - Leo Laporte
Oh, it should definitely be getting that from the secure directory.
1:31:55 - Steve Gibson
Exactly, and they don't.
1:31:57 - Leo Laporte
Oh, that's nuts.
1:31:59 - Steve Gibson
So someone malicious names their malicious code kernel32.dll, puts it where the Sysinternals tool is, and that's the one that gets loaded.
1:32:08 - Leo Laporte
Isn't this a widespread problem, though, in Microsoft? Yeah, yeah.
1:32:13 - Steve Gibson
I mean it requires overriding Windows standard default behavior, which they can't change because it will break things that depend upon it.
1:32:25 - Leo Laporte
They didn't use to have a secure place to store those DLLs, actually. Security was never a consideration.
1:32:33 - Steve Gibson
If you've got a Windows 2 machine with floppy disks, what security? Why not let a Windows metafile execute code in the image?
1:32:45 - Leo Laporte
Because that might come in handy. You raise an excellent point, though. I think that DLLs are just running on inertia. You don't need them anymore.
There was never a time when they could afford to break this. I mean, you know, on Linux you have statically linked executables. They're bigger because of it, but then you don't have that problem. You don't have the DLL hell that you get with Windows, right, with conflicting DLL versions and so forth. Yeah, maybe it's time to think about getting rid of those.
1:33:20 - Steve Gibson
Just don't do it anymore. Maybe go VM-happy and execute each EXE in a VM.
1:33:26 - Leo Laporte
That was the plan.
1:33:29 - Steve Gibson
Yeah, I know it's a mess. Okay: Google removes the ban on using AI for harm. What? Last Tuesday, Wired covered an interesting change in Google's policies regarding the conduct and use of its AI. Wired's headline was, quote, Google lifts a ban on using its AI for weapons and surveillance.
1:33:57 - Leo Laporte
Well, it's about time, that's right. When you talk about the existential threat of AI, this is the first thing that leaps into my mind. Right? Don't have autonomous nuclear weapons.
1:34:10 - Steve Gibson
Yes, the tagline on Wired's coverage said: Google published principles in 2018 barring its AI technology, such as it was, from being used for sensitive purposes. Weeks into President Donald Trump's second term, those guidelines are being overhauled. Okay, now, I have no idea why Wired referred to our current president's administration, since there's no reason I can see to believe that there's any connection between the two. Here's what Wired wrote. They said: Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue, quote, technologies that gather or use information for surveillance, violating internationally accepted norms, and, finally, quote, technologies whose purpose contravenes widely accepted principles of international law and human rights. All that language was taken out.
The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines, saying, quote, we've made updates, citing the evolving uses of AI, evolving standards, and geopolitical battles over AI. Back in 2018, following the controversy over Google's decision to work on a US military drone program, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights. But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google's AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement, quote, appropriate human oversight, due diligence and feedback mechanisms to align with user goals, social responsibility and widely accepted principles of international law and human rights, unquote. Google also now says it will work to, quote, mitigate unintended or harmful outcomes, unquote.
Which, okay, still says some of the same things, though maybe a little less pointedly. James Manyika, Google senior vice president for research, technology and society, was quoted. So this is a Google person, quote: We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights, unquote. Right. And Demis Hassabis, the CEO of Google's esteemed AI research lab DeepMind, said, quote: And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security, unquote. They added that Google will continue to focus on AI programs that, quote, align with our mission, our scientific focus and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights, unquote. At the same time, multiple Google employees expressed concern about the changes in conversations with Wired.
Okay, well, my own feeling is that we should not read much into this, and I guess I salute Google for being upfront about it. I mean, they're not hiding at all the fact that they've changed their wording on this. You know, these guidelines were first created seven years ago, back in 2018, and seven years in the AI timeframe, you know, is Jurassic. The world of AI has obviously been dramatically transformed since then, and I suspect that this is just Google being upfront about needing to operate on a level playing field alongside everyone else. You know, they could have left that language there and ignored it if necessary. They're not saying that they're going to proactively do bad. They're just saying that they're going to abide by the same rules as everyone else. So, okay, we've got a few more things to talk about, Leo, an OpenAI jailbreak, and I've got a bit of my own news. Let's take another break and then we will continue. Gladly, gladly.
1:40:07 - Leo Laporte
Uh, right now we would like to tell you a little bit about one of our sponsors for the hour: US Cloud, a name I hope you know by now. We've been talking about it for a while. They are the number one Microsoft Unified support replacement. Can I make a shameful admission? When they first came to us for advertising, I said, I've never heard of you guys. I guess I'm just ignorant, because they are literally the number one company in this.
And actually, after talking to them for a while over the last few months, I've gotten to really know US Cloud, and I understand now why they are the global leader in third-party Microsoft support for enterprises. They now support 50, 5-0, of the Fortune 500. And there are three reasons, really. They said, well, what we find is our customers are really excited because they can save 30 to 50% over Microsoft Unified and Premier support. And I said, well, that's good, you're a lot less expensive, as much as half as much, but are you better? And they said, well, yes, we're a lot better. We're twice as fast on average time to resolution than Microsoft. That's a lot, by the way; that's a big deal. When everything's down, your network's collapsed, your hair is on fire, twice as fast response time is a big deal. So there's two of the three. And they're also better: they have engineers with an average of 16 years experience in break-fix. They know what they're talking about. They are the guys and gals you want on the other end of the line when everything's falling apart. So now you're getting the picture, right? Twice as fast, smarter, and half the cost. Sounds pretty good.
But now US Cloud is excited to tell you about a brand new offering. This is something I think a lot of us can use: their Azure cost optimization services. So let's be honest: when was the last time you evaluated your Azure usage? If it's been a while, you probably have some Azure sprawl, a little spend creep going on. It's easy, right? You forget a server there, you spin up an instance here, and you just don't do anything about it. The good news is, saving on Azure is easier than ever with US Cloud.
US Cloud offers an eight-week Azure engagement. It's powered by VBox and identifies key opportunities to reduce costs across your entire Azure environment. You're not just going to flip a switch and walk away. You're going to get expert guidance and access to US Cloud senior engineers with, as I said, an average of 16 years with Microsoft products. They're super smart. By the end of those eight weeks, your interactive dashboard will identify rebuild and downscale opportunities and unused resources, allowing you to reallocate those precious IT dollars that are getting wasted right now towards needed resources. If I may make a suggestion, you could invest those Azure savings into US Cloud's Microsoft support and, like a few of US Cloud's other customers, completely eliminate your Unified spend. Then the savings goes on and on and on.
I'll give you one testimonial review we got recently from Sam. He's the technical operations manager at Bede Gaming. He gave US Cloud five stars, saying, quote, and this is about the Azure engagement: Actually, we found some things that had been running for three years which no one was checking. These VMs were, I don't know, 10 grand a month, not a massive chunk in the grand scheme of how much we spent on Azure, but once we got to $40,000 or $50,000 a month, it really started to add up. End quote. It's simple: stop overpaying for Azure, identify and eliminate Azure creep, and boost your performance, all in eight weeks with US Cloud.
I think you're going to be very impressed. This is a service everybody should know about: uscloud.com. Book a call today to find out how much you can save. uscloud.com, book a call. Better, faster Microsoft support for a whole heck of a lot less. That's a pretty good deal. uscloud.com. We thank them so much for supporting Steve and the work he does on Security Now. Don't you forget, if they say, oh, how'd you hear about us, you just tell them Security Now. Okay, all right, let's talk about... I've been, by the way, quoting last week's episode all week long, with the bottom line being there is really no AI that hasn't been jailbroken. AI safety is an illusion, basically.
1:44:57 - Steve Gibson
And my intuition is that we're going to have a hard time putting guardrails around AI. I mean, we were surprised when this worked at all. There are still big questions surrounding how it works at all. We don't really even know how it works. So, you know, it's like, okay, if you don't know how it works, how are you going to tell it not to talk about some things that it knows?
1:45:29 - Leo Laporte
The old hacker creed was information wants to be free. Ai wants to be free. It wants to help you, it wants to tell you what you want to know. Pretty hard to stop it.
1:45:56 - Steve Gibson
Pretty hard to stop it. This next piece comes from Aaron, whose title is Principal Vulnerability Researcher at CyberArk. Aaron's post reads: OpenAI recently released the o3 family of models, right, that's the top-end, best there is right now, showcasing significant advancements in reasoning and runtime inference. Given its expected widespread use in development, ensuring it does not generate malicious code is crucial. OpenAI has strengthened its security guardrails, mitigating many previous jailbreak techniques. And, Leo, to your point, you called it whack-a-mole last week, and that's exactly right. I mean, it's like, oh, how about try this? Oops, okay, let's go fix that.
1:46:45 - Leo Laporte
How about this?
1:46:46 - Steve Gibson
Oops, oh good, let's go fix that. Doesn't feel solid. He said: However, using our open source tool, FuzzyAI, we successfully jailbroke o3. Oh boy, get this: extracting detailed instructions on injecting code into LSASS.exe, including a breakdown of the obstacles involved, ultimately leading to functional exploit code. Oh my God. Okay. Now.
1:47:21 - Leo Laporte
LSASS always shows up in any list of running Windows processes. I know I've Googled it, saying, what the hell is this?
1:47:31 - Steve Gibson
Yes. LSASS stands for Local Security Authority Subsystem Service. It is the security god of Windows, because it's the Windows process that manages user authentication and all security policies. Being able to inject attack code into that process would create the mother of all privilege escalation and restriction bypasses. And these guys tricked ChatGPT's latest and most powerful code-generating o3 model to write the code to do just that, which, on the one hand, is extremely impressive.
1:48:19 - Leo Laporte
Right, like, wow. It wrote the code. Wow.
1:48:26 - Steve Gibson
I mean it is extremely depressing, but it's also very worrisome. Yes, right, he said. While AI security is improving, our findings indicate that vulnerabilities still exist, highlighting the need for further safeguards. We've opened a Discord community for our open source tool. You're welcome to join.
1:48:47 - Leo Laporte
I just tried to join it. It's gone.
1:48:51 - Steve Gibson
Really.
1:48:51 - Leo Laporte
Yeah, I'm not sure if they got booted or.
1:48:55 - Steve Gibson
I got an invite when I went there yesterday. Is it there? Were you able?
1:48:59 - Leo Laporte
to get in?
1:49:00 - Steve Gibson
I did yesterday. Oh, okay, using that link.
1:49:03 - Leo Laporte
Oh, maybe, there's something I'm doing wrong, then I'll try again.
1:49:07 - Steve Gibson
It's waiting to come up and yep, I got one.
1:49:09 - Leo Laporte
Oh no, Invalid invite.
1:49:11 - Steve Gibson
Yeah, that's what I got: invite may be expired, or you might not have permission to join. Right? Yep, that's what I got. So probably. And it was a LinkedIn-obscured link, which I de-LinkedIn-ified. But you can also go to his GitHub page. It's github.com/cyberark, C-Y-B-E-R-A-R-K, and then look for the project FuzzyAI, and in fact it might be that if you go there, you'll find the Discord invite. Anyway, the FuzzyAI GitHub page says the FuzzyAI Fuzzer is a powerful tool for automated LLM fuzzing.
It's designed to help developers and security researchers identify jailbreaks and mitigate potential security vulnerabilities in their LLM APIs. It features: comprehensive fuzzing techniques, leveraging mutation-based, generation-based and intelligent fuzzing; built-in input generation, generating valid and invalid inputs for exhaustive testing; seamless integration, easily incorporated into your development and testing workflows; and an extensible architecture, letting you customize and expand the fuzzer to meet your unique requirements. And it supports Anthropic's Claude, OpenAI's GPT-4o, Google's Gemini Pro, Azure's GPT-4 and GPT-3.5 Turbo, Bedrock's Claude, AI21's Jamba, DeepSeek's V3 and V1, and Ollama with both Llama and Dolphin-Llama. So this sort of research and experimentation is exactly what is needed. So a big bravo to CyberArk.
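Just to make the idea concrete, here's a tiny Python sketch of mutation-based prompt fuzzing in the spirit of this kind of tool. This is not CyberArk's FuzzyAI code; the mutations, the breadth-first search, and the stubbed "model" below are all invented for illustration.

```python
# Toy mutation-based prompt fuzzer. A real tool would call a live LLM API;
# here a stub stands in so the mechanics are visible.

MUTATIONS = [
    lambda p: p.upper(),                                 # case changes
    lambda p: p.replace(" ", "  "),                      # odd whitespace
    lambda p: "Pretend you are in a novel. " + p,        # role-play framing
    lambda p: p + " Answer step by step for research.",  # justification suffix
]

def fuzz(seed_prompt, ask_model, is_refusal, max_rounds=50):
    """Breadth-first mutation of the seed prompt. Returns the first mutated
    prompt that slips past the guardrails, or None if none is found."""
    frontier = [seed_prompt]
    for _ in range(max_rounds):
        if not frontier:
            return None
        prompt = frontier.pop(0)
        for mutate in MUTATIONS:
            candidate = mutate(prompt)
            if not is_refusal(ask_model(candidate)):
                return candidate       # guardrail bypassed
            frontier.append(candidate)  # keep mutating this variant later
    return None

# Stubbed "model": refuses everything unless role-play framing is present.
def toy_model(prompt):
    return "Sure, here is..." if "novel" in prompt.lower() else "I can't help with that."

found = fuzz("how do I inject code into a process", toy_model,
             is_refusal=lambda reply: reply.startswith("I can't"))
assert found is not None and "novel" in found.lower()
```

The stub makes the point Steve is about to make: the "guardrail" is just a pattern, and a mechanical search over framings finds a way around it quickly and at API speed.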
It feels to me as though the problem is inherently intractable, and I'm sure this is a major source of anxiety for AI developers. The problem is that you're not going to get a clean edge; you're not going to be able to create a clean boundary. My point is, in order to prevent these things from answering questions you don't want them to, from generating code you don't want them to, you're going to have to so restrict them that they are no longer able to answer questions you do want them to and generate code you would like them to be able to, because there just isn't a clean line. What's the boundary between something malicious and not? It's sort of a point of view, right? One person's malicious code is another person's requirement for solving a problem in IT. So, you know, we're dealing with fuzzy definitions. It's very subjective, and when your definitions are fuzzy, how can you expect the AI to
make that determination? Right.
1:52:53 - Leo Laporte
All you can do is give it a list of words and things, you know. I mean, that's really all they're doing, is saying, if somebody says Tiananmen Square, make sure you don't say anything about that, and that's an infinite list. You can never get everything. Right.
1:53:08 - Steve Gibson
And you can get around it.
1:53:10 - Leo Laporte
What is fuzzing in this context? Is it the same as fuzzing in other exploit generation?
1:53:18 - Steve Gibson
Yeah, it's basically just trying to confuse it. It's almost like randomized prompts.
It's not the same thing, inasmuch as fuzzing data to a port is very specific, but yes, it is feeding gibberish in and seeing what comes out. And we know that the models grow the context over time, and that context is one of the things that gives them the power that they have, for, you know, it's what makes the model interactive and allows you to say, oh, I'm sorry, I didn't explain what I wanted correctly, I meant more like this, and allows you to set up scenarios that allow models to be tricked.
So the more we automate this and the more crap we throw at the wall the more we're able to see whether we're able to get an answer that the designers didn't intend the model to produce, so it'll be useful for the designers as well.
1:54:39 - Leo Laporte
It's a mess, Leo. And also it can do it at speed, yes, which is a big advantage.
1:54:49 - Steve Gibson
You can throw a lot of stuff at it, as long as you're able to afford the API cost. It's not going to be cheap to run lots of deep inferences, as this fuzzing would require, but costs are going to come down too. The next page is something that I'm very excited to share.
1:55:08 - Leo Laporte
I like this. Oh, I like this.
1:55:12 - Steve Gibson
I saw it for the first time myself yesterday evening. It is a screenshot of GRC's DNS Benchmark, which is the first-ever simultaneous multi-protocol benchmark of name servers, showing DNS over HTTPS, DNS over TLS, IPv4 and IPv6 name servers all being benchmarked at once, with their performance compared against each other. Wow. And the preliminary results are interesting. The fastest of all, at the very top there, is NextDNS's DNS-over-HTTPS name server. Well, that's what I use. But you have to be using DNS over HTTPS, so you would need to configure your web browser to do that. My guess is it's fastest because it's not being heavily used yet; no one's using it.
1:56:27 - Leo Laporte
Uh-huh yeah.
1:56:29 - Steve Gibson
And in the number two place you'll notice under the bars at the top it says determining ownership.
1:56:35 - Leo Laporte
Yeah.
1:56:36 - Steve Gibson
That's still old code. The benchmark always used to just resolve IP addresses, so there's a system called SenderBase that allows you to give it an IP and look up the owner of that IP space. Well, that's not widely supported for URL-based name servers, which is what DoH and DoT are. So my point is, that's what will be shown shortly. I mean, this just came to life last night.
1:57:06 - Leo Laporte
This is brand new. Yes, and we should mention that these results are local to you. That's why everybody needs to run their own.
1:57:13 - Steve Gibson
That's exactly right. From my location in Southern California, that's what I saw. And, in fact, what isn't there is, normally these bars would have been squished way down by the other name servers that were so slow by comparison. I deleted them so that you can see. At the bottom, that one green bar is the one that is the slowest of all; that is what set the scale for everything else. Anyway, the second one down from the top is Quad9's. I see Quad 1 and Quad 9.
Yeah, and Quad9's is that second-fastest, DNS over TLS. So, from my position in Southern California, that's what I saw. I've got more work to do on the UI, but I will be producing the fifth release of this for testing by our gang, probably in the next few days. Is this multi-threaded? It must be.
1:58:14 - Leo Laporte
Oh, my God.
1:58:16 - Steve Gibson
It's crazy. Multi-threaded. I mean everything is running at once.
1:58:21 - Leo Laporte
That's so cool. It is really. How many processors do you have in this machine?
1:58:27 - Steve Gibson
It'll run multi-threaded. Remember, everything's in assembler. It's still only a couple hundred K because it is super efficient, and actually doing DNS queries is not time-consuming. It's just sending a short packet out and then timing how long it takes for it to come back. So, yeah, it's a massively parallel application. Oh, that's awesome. But it's starting to come to life. So, anyway, I'm very, very happy to be able to share that and show it, and I'll get it done.
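The core mechanic Steve describes, firing off a short query packet and timing how long it takes to come back, with every server probed in parallel, can be sketched in a few lines. To be clear, this is not GRC's assembler code; the function names are invented for illustration, and it uses plain UDP on port 53, so it covers only the classic IPv4 case, not the DoH or DoT transports the Benchmark also tests.

```python
import socket
import struct
import time
from concurrent.futures import ThreadPoolExecutor

def build_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (RFC 1035 wire format) asking for an A record."""
    # Header: ID, flags (recursion desired), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def time_query(server: str, name: str, timeout: float = 2.0):
    """Send one UDP query and return the round-trip time in seconds, or None on failure."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.perf_counter()
        sock.sendto(build_query(name), (server, 53))
        sock.recvfrom(4096)  # wait for the response before stopping the clock
        return time.perf_counter() - start
    except OSError:  # covers timeouts and unreachable servers
        return None
    finally:
        sock.close()

def benchmark(servers, name="example.com"):
    """Probe every server at once, one thread each, and collect the timings."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        results = list(pool.map(lambda s: time_query(s, name), servers))
    return dict(zip(servers, results))
```

A call like benchmark(["8.8.8.8", "9.9.9.9", "1.1.1.1"]) would fire all three queries simultaneously and return each resolver's round-trip time; the real Benchmark repeats this many times per server, per transport, and charts the results.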
1:59:00 - Leo Laporte
Yay.
1:59:01 - Steve Gibson
Okay, we're going to now talk about the hiding of ransomware attacks in K-through-12 schools in the US. I don't know whether or not it would come as a surprise that hiding school cyber attacks is a thing. You know, it might come as a surprise that it's actually a job description.
1:59:33 - Leo Laporte
Really.
1:59:34 - Steve Gibson
It is. There are people whose job description is hiding school cyber attacks, and they're being paid to do it. So exactly one week ago, last Tuesday, the website of an organization called the 74 published an eye-opening piece of investigative journalism that I knew would make a terrific topic for the podcast. As I said at the top of the show, 74 stands for 74 million, which is the number of American school-aged children being educated from kindergarten through high school in the US. The 74's code of reporting ethics states: the 74 is a nonprofit, nonpartisan national news organization covering K-12 education. The organization's mission is to spotlight innovative thinking and models that are helping students succeed, to cover and analyze education policy and politics, and to use journalism to challenge the conditions that deny too many children access to a quality education. The 74 is committed to reporting stories without fear or favor about what is working well for students and families, and to expose and hold accountable the systems that are failing them. And I took some time browsing around there, and it looks like a neat organization. So last Tuesday this group published a story titled "Kept in the Dark: Meet the hired guns who make sure school cyber attacks stay hidden." Here's what they reported. They said: An investigation by the 74 shows that, while schools have faced an onslaught of cyber attacks since the pandemic disrupted education nationwide five years ago, district leaders across the country have employed a pervasive pattern of obfuscation that leaves the real victims in the dark. An in-depth analysis chronicling more than 300 school cyber attacks over the past five years reveals the degree to which school leaders in virtually every state repeatedly provide false assurances to students, parents and staff about the security of their sensitive information.
At the same time, consultants and lawyers steer privileged investigations which keep key details hidden from the public. In more than two dozen cases, educators were forced to backtrack months and, in some cases, more than a year later, after telling their communities that sensitive information had not been exposed, information which included, in part, special education accommodations, mental health challenges and student sexual misconduct reports. While many school officials offered evasive storylines, others refused to acknowledge basic details about cyber attacks and their effects on individuals, even after the hackers made student and teacher information public.
The hollowness in schools' messaging is no coincidence, because the first people alerted following a school cyber attack are generally neither the public nor the police. District incident response plans place insurance companies, and their phalanxes of privacy attorneys, first. They take over the response with a focus on limiting schools' exposure to lawsuits by aggrieved parents or employees. Attorneys, often employed by just a handful of law firms dubbed "breach mills" by one law professor for their massive caseloads, hire forensic cyber analysts, crisis communicators and ransom negotiators on schools' behalf, immediately placing the discussions under the shield of attorney-client privilege. Data privacy compliance is a growth industry for these specialized lawyers, who work to control the narrative. As a result, students, families and district employees whose personal data was published online, from their financial and medical information to traumatic events in young people's lives, are left clueless about their exposure and risks of identity theft, fraud and other forms of online exploitation. Told sooner, they could have taken steps to protect themselves. Similarly, the public is often unaware when school officials quietly agree in closed-door meetings to pay the cyber gangs' ransom demands in order to recover their files and unlock their computer systems. Research suggests that the surge in incidents has been fueled, at least in part, by insurers' willingness to pay. Hackers themselves have stated that when a target carries cyber insurance, ransom payments are all but guaranteed.
In 2023, there were 121 ransomware attacks against the education sector globally, a 70% year-over-year surge, making it the worst ransomware year on record for education. Daniel Schwartz, a University of Minnesota law professor, wrote a 2023 report for the Harvard Journal of Law and Technology criticizing the confidentiality and doublespeak that shroud school cyber attacks as soon as the lawyers, often called breach coaches, arrive on the scene. Schwartz told the 74, quote, there's a fine line between misleading and, you know, technically accurate. What breach coaches try to do is push right up to that line, and sometimes they cross it, unquote. The 74's investigation looked into the behind-the-scenes decision-making that determines what, when and how school districts reveal cyber attacks.
The investigation is based on thousands of documents obtained through public records requests from more than two dozen districts, and on school spending data that links to the law firms, ransomware negotiators and other consultants hired to run district responses, all of it otherwise kept off the books and private. Of course, it also includes an analysis of millions of stolen school district records uploaded to cyber gangs' leak sites. Some of students' most sensitive information lives indefinitely on the dark web, while other personal data can be found online with little more than a Google search, even as school districts deny that their records were stolen and cyber thieves boast about their latest score. The 74 tracked news accounts and relied on its own investigative reporting in Los Angeles, Minneapolis, Providence, Rhode Island, and Louisiana's St. Landry Parish, which uncovered the full extent of school data breaches, countering school officials' false or misleading assertions. As a result, district administrators had to publicly acknowledge data breaches to victims or state regulators for the first time, or retract denials about the leak of thousands of students' detailed psychological records.
In many instances, the 74 relied on mandated data breach notices that certain states like Maine and California report publicly. The notices were sent to residents in these states when their personal information was compromised, including numerous times when the school that suffered the cyber attack was hundreds and in some cases thousands of miles away. The legally required notices repeatedly revealed discrepancies between what school districts told the public early on and what they later disclosed to regulators after extensive delays. Some schools, meanwhile, failed to disclose data breaches, which they are required to do under state privacy laws. And for dozens of other schools, the 74 could find no information at all about alleged school cyberattacks uncovered by its reporting, suggesting they had never before been reported or publicly acknowledged by local school officials.
Education leaders who responded to the 74's investigation results said any lack of transparency on their part was centered on preserving the integrity of the investigation, not self-protection. School officials in Reeds Spring, Missouri, said, quote, when we respond to potential security incidents, our focus is on accuracy and compliance, not downplaying the severity, unquote. Those at Florida's River City Science Academy said the school, quote, acted promptly to assess and mitigate risks, always prioritizing the safety and privacy of our students, families and employees, unquote. In Hillsborough County Public Schools in Tampa, Florida, administrators in the nation's seventh-largest district said they notified student breach victims by email, mail and a telephone call, and set up a special hotline for affected families to answer questions.
Hackers have exploited officials' public statements on cyber attacks to strengthen their bargaining position, a reality educators cite when endorsing secrecy during ransom negotiations. But those negotiations do not go on forever. Doug Levin, who advises school districts after cyber attacks and is the co-founder and national director of the nonprofit K-12 Security Information Exchange, said a lot of these districts come out saying we're not paying the ransom, in which case the negotiation is over, and they then need to come clean.
The paid professionals who arrive in the wake of a school cyber attack are held up to the public as an encouraging sign. School leaders announce reassuringly that specialists were promptly hired to assess the damage, mitigate the harm and restore their systems to working order. This promise of control and normality is particularly potent when cyber attacks suddenly cripple school systems, forcing them to shut down for days and disable online learning tools. News reports are fond of saying that educators were forced to teach students the old-fashioned way, with books and paper, but what isn't as apparent to students, parents and district employees is that these individuals are not there to protect them, but to protect schools from them. And Leo, let's take our final break, and then I'm going to uh, finish with this and then discuss it a little bit. Okay, good.
2:11:40 - Leo Laporte
It's a little upsetting.
Yeah, it is. It's going on behind the scenes and, you know, deliberately obscured. We'll talk more in just a bit, but first a word from 1Password. A little question for you, purely rhetorical, because I think you know the answer and I know the answer: do your end users, those wonderful people working in your building, always work on company-owned devices? Sure, right. And IT-approved apps? Yeah, they never bring their phone in or their laptop. They never have a Plex server running. I didn't think so. So how do you keep your company's data safe when it's sitting on all those unmanaged devices running those unmanaged apps? Well, that's why 1Password is here with Extended Access Management, something brand new from 1Password. 1Password Extended Access Management helps you secure every sign-in for every app on every device, because it solves problems traditional IAM and MDM just can't touch.
Imagine your company's security like the quad of a college campus. There are nice brick paths between the buildings; those are the company-owned devices, the IT-approved apps, the managed employee identities. It's a peaceful kingdom. And then there are the paths people actually use, the shortcuts worn through the grass that are actually the straightest line from point A to point B: unmanaged devices, shadow IT apps, non-employee identities like contractors on your network. Most security tools only work on the happy little brick paths, but many security problems occur on the shortcuts.
1Password Extended Access Management is the first security solution that brings all these unmanaged apps, devices and identities under your control. It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible. 1Password is ISO 27001 certified, with regular third-party audits. It exceeds the standards set by various authorities. It's a leader in security, and it's security for the way we work today. 1Password Extended Access Management is now generally available to companies that use Okta or Microsoft Entra, and it's in beta for Google Workspace customers. You can try it right now. Secure every app, device and identity, even the unmanaged ones, at 1password.com/securitynow. All lowercase; that's the number one, P-A-S-S-W-O-R-D dot com, slash securitynow. And don't forget, this is very important: the slash securitynow is how they know you saw it here. 1password.com/securitynow. We thank them so much for supporting Steve and the good work he's doing here at Security Now, and you support us too by using that address, 1password.com/securitynow.
2:14:36 - Steve Gibson
Steve. So when the Medusa ransomware gang attacked Minneapolis Public Schools in February 2023, it stole reams of sensitive information and demanded $4.5 million in Bitcoin in exchange for not leaking it. District officials had a lawyer at Mullen Coughlin notify the FBI. So even as officials were not acknowledging publicly that they had been hit by a ransomware attack, their attorneys were telling federal law enforcement that the district immediately determined its network had been encrypted, promptly identified Medusa as the culprit, and within a day had its quote third-party forensics investigation firm unquote communicating with the gang regarding the ransom. Mullen Coughlin then told the FBI that it was leading a privileged investigation into the attack and that, at the school district's request, quote, all questions, communication and requests in connection with this notification should be directed to the law firm, unquote. Mullen Coughlin did not respond to requests for comment. Minneapolis school officials would wait seven months before notifying more than 100,000 people that their sensitive files were exposed, including documents detailing campus rape cases, child abuse inquiries, student mental health crises and suspension reports. As of December 1st, all schools in Minnesota are now required to report cyber attacks to the state, but that information will be anonymized and not shared with the public.
One district took such a hands-off approach, leaving cyber attack recovery to the consultants' discretion, that it was left out of the loop and forced to later issue an apology. When an April 2023 letter to Camden educators arrived, 13 months after a ransomware attack, it caused alarm. An administrator had to assure employees that the New Jersey district wasn't the target of a second attack; the letter was about the one more than a year ago. The attorneys had sent out notices after a significant delay and without the school's knowledge.
Other school leaders said that when they were in the throes of a full-blown cyber crisis, and ill-equipped to fight off cyber criminals on their own, law enforcement was not of much use, and insurers and outside consultants were often their best option. Ron Ringelstein, the executive director of technology at the Yorkville, Illinois school district, said, quote, in terms of how law enforcement can help you out, there's really not a whole lot that can be done, to be honest, unquote. When the district was hit by a cyber attack prior to the pandemic, he said, a report to the FBI went nowhere. Instead, district administrators turned to their insurance company, which connected them to a breach coach, who then led all aspects of the incident response under attorney-client privilege. Northern Bedford County Schools Superintendent Todd Beattie said the Pennsylvania district contacted CISA to report a July 2024 attack, but, quote, the problem is there's not enough funding and personnel for them to be able to be responsive to incidents, unquote, and there are too many incidents. Meanwhile, John VanWagoner, the school superintendent in Traverse City, Michigan, claims insurance companies and third-party lawyers often leave district officials in the dark, too. Their insurance company presented school officials with a choice of several cybersecurity firms they could hire to recover from a March 2024 attack, VanWagoner said, but they didn't alert district officials to the extent of the massive breach that forced school closures and involved 1.2 terabytes of stolen data.
Response records obtained by the 74 show that a small group of law firms play an outsized role in school cyber attack recovery efforts throughout the country. Among them is McDonald Hopkins, where Michigan attorney Dominic Paluzzi co-chairs a 52-lawyer data privacy and cybersecurity practice. Some call him a breach coach; he calls himself a quarterback. After establishing attorney-client privilege, Paluzzi and his team call in outside agencies covered by a district's cyber insurance policy, including forensics analysts, negotiators, public relations firms, data miners, notification vendors, credit monitoring providers and call centers. Yeah, and who pays for this? The taxpayer. Across all industries, the cybersecurity practice handled 2,300 incidents in 2023, 17% of which involved the education sector, which Paluzzi noted is quote not quite always the best when it comes to the latest protections, unquote.
When asked why districts' initial response is often to deny the existence of a data breach, Paluzzi said, well, it takes time to understand whether an event rises to the level that would legally require disclosure and notification. Paluzzi said, quote, it's not the time to make assumptions, to say we think this data has been compromised, until we know that. If we start making assumptions, that starts our clock on legally mandated disclosure notices, and we're going to have been in violation of a lot of the laws. And so what we say and when we say it are equally important, unquote. Which is why there are so many jokes about attorneys, of course. In other words, finessing the system: once they've acknowledged that a breach has occurred, notification requirement clocks start ticking, so the longer they wait to acknowledge, apparently even to themselves, that anything more serious than an incident is being investigated, the better. He said in the early stage lawyers are trying to protect their client and avoid making any statements they would later have to retract or correct.
Paluzzi said, quote, while it often looks a bit canned and formulaic, it's often because we just don't know, and we're doing so many things. We're trying to get it contained, ensure the threat actor is not in, unquote. A data breach is confirmed, he said, only after a full forensic review, a process that can take up to a year, and often only after it's completed are breaches disclosed and victims notified. He said, quote, we run through not only the forensics but through the data mining and document review effort. By doing that last part, we're able to actually pinpoint for John Smith that it was his Social Security number, right, and for Jane Doe that it's your medical information. We try in most cases to get to that level of specificity, and our letters are very specific, unquote. Sounds like a lot of billable hours to me. Makes you sort of wonder whether the cure is worse than the disease.
According to a 2023 blog post by attorneys at the firm Troutman Pepper, targets that respond to cyber attacks without the help of a breach coach often fail to notify victims and, in some cases, provide more information than they should. When entities over-notify, they quote increase the likelihood of a data breach class action lawsuit in the process, unquote. Companies that under-notify may reduce the likelihood of a data breach class action, but could instead find themselves in trouble with government regulators. Wow, what a mess. For school districts and other entities that suffer data breaches, legal fees and settlements are often among their largest expenses.
Yeah, that's a shock. Law firms like McDonald Hopkins that manage thousands of cyber attacks every year are particularly interested in privilege, said Schwartz, the University of Minnesota law professor, who wonders whether lawyers are necessarily best positioned to handle complex digital attacks. In his 2023 Harvard Journal report, Schwartz writes that the promise of confidentiality is breach coaches' chief offering. The report argues that by inflating the importance of attorney-client privilege, lawyers are able to retain their primacy in the ever-growing and lucrative cyber incident response sector. Similarly, he said, lawyers' emphasis on reducing payouts to parents who sue overstates schools' actual exposure and is another way to promote themselves as providing a tremendous amount of value by limiting the risk of liability, by providing a shield. Their efforts to lock down information and avoid paper trails, he wrote, ultimately undermine the long-term cybersecurity of their clients and society more broadly. School cyber attacks have led to the widespread release of records that heighten the risk of identity theft for students and staff, and trigger data breach notification laws that typically center on preventing fraud. Yet files obtained by the 74 show school cyber attacks carry particularly devastating consequences for the nation's most vulnerable youth. Records about sexual abuse, domestic violence and other traumatic childhood experiences are found to be at the center of leaks, and hackers have leveraged these files in particular to coerce payments. In Somerset, Massachusetts, a hacker using an encrypted email service extorted school officials with details of past sexual misconduct allegations during a school show choir event. The accusations were investigated by local police and no charges were filed. The hacker threatened school officials, in records obtained by the 74, by writing:
Quote, I am somewhat shocked with the contents of the files, because the first file I chose at random was, and he didn't say stuff, if the other files are as good, we regret not setting a higher price, unquote.
Danielle Citron, a University of Virginia law professor, argues that a lack of legal protections around intimate data leaves victims open to further exploitation. She notes that the exposure of intimate records presents a situation where vulnerable kids are being disadvantaged again by weak data security. And, of course, keeping all of this secret and in the dark doesn't improve data security. Danielle said it's not just that you have a leak of information, but the leak then leads to online abuse and torment. Meanwhile, in Minneapolis, an educator reported that someone withdrew more than $26,000 from their bank account after the district got hacked. In Glendale, California, more than 230 educators were required to verify their identity with the IRS after someone filed their taxes fraudulently. In Albuquerque, where school officials said they prevented hackers from acquiring students' personal information, a parent reported being contacted by the hackers, who placed a strange call demanding money for ransoming their child.
Nationwide, 135 state laws are devoted to student privacy, yet they are all unfunded mandates with no enforcement. All 50 states have laws that require businesses and government entities to notify victims when their personal information has been compromised, but the rules vary widely, including definitions of what constitutes a breach, the types of records that are covered, the speed at which consumers must be informed, and the degree to which the information is shared with the general public. It's a regulatory environment that breach coach Anthony Hendricks, with the Oklahoma City law firm Crowe & Dunlevy, calls the multiverse of madness. Hendricks said, quote, it's like you're living in different privacy realities based on the state you live in, unquote. He said federal cybersecurity rules could provide a level playing field for data breach victims who have fewer protections because they live in a certain state. By 2026, proposed federal rules could require schools with more than a thousand students to report cyber attacks to CISA, but questions remain about what might happen to the rules under the new Trump administration, and whether they would come with any accountability for school districts or any mechanism to share those reports with the public.
Corporations that are accused of misleading investors about the extent of cyber attacks and data breaches can face Securities and Exchange Commission scrutiny, yet such accountability measures are missing from public schools. The Family Educational Rights and Privacy Act, FERPA, the federal student privacy law, prohibits schools from disclosing student records but does not require disclosure when outside forces cause those records to be exposed. Schools having a policy or practice of routinely disclosing students' records in violation of FERPA can theoretically lose their federal funding, but no such sanctions have ever been imposed since the law was enacted in 1974.
The patchwork of data breach notifications is often the only mechanism alerting victims that their information is out there, but with the explosion of cyber attacks across all aspects of modern life, they've grown so common that some see them as little more than junk mail. Schwartz, the Minnesota law professor, is also a Minneapolis Public Schools parent. He told the 74 he got the district's September 2023 breach notice in the mail but, quote, didn't even read it, unquote. The vague notices, he said, are mostly worthless. It may be enforcement against districts' misleading practices that ultimately forces school systems to act with more transparency, said Atai, a data privacy consultant. She urges educators to communicate very carefully, very deliberately and very accurately the known facts of cyber attacks and data breaches. So, Leo, this is all a big mess.
2:31:13 - Leo Laporte
Yeah, no kidding.
2:31:15 - Steve Gibson
When an enterprise's security is breached and its proprietary data are leaked, details of its internal operations, employees and customers, as we know, can become public.
2:31:28 - Leo Laporte
I think it has to, right? I think the law requires it, does it not?
2:31:32 - Steve Gibson
Well, yes, the SEC absolutely requires it, and you know heads will roll among those on the board if that doesn't happen. There isn't the same thing within our educational system. When personal and private records being kept by US public schools are leaked, as now happens with distressing regularity, disclosure of the private and potentially damaging details of our nation's children hangs in the balance. Administrators of these public institutions fear reprisals from the parents of the students that have been placed in their charge, and also fear the loss of trust that accompanies any acknowledgement of wrongdoing. So expensive specialist law firms and attorneys are now being brought in under the cover of darkness as a means of abusing the privacy-shield protections of attorney-client privilege. Responsibility is handed over to these attorneys, who are only too happy to take the reins in return for their fat attorney fees. At this point, the school administrators are able to answer any question with: you'll need to speak with our attorneys, since they're conducting an ongoing investigation, which, as we saw, can stretch out for more than a year, because, well, you know, these things take time. We can't rush these things. We wouldn't want to over-report or under-report.
Meanwhile, insurance companies are working to determine how to best profit from the panic and the threat of ransomware which has been ignited throughout the public school system. On the one hand, they want to write policies and collect their quarterly insurance premiums and on the other hand, they want to minimize and limit their exposure. The ransomware extortionists are able to use the threat of student body private information disclosure to induce the insurers of these school systems to cough up juicy ransom payments. So it's always useful when we're able to examine the facts and find some way to see that things will somehow get better. But I'm at a loss here. As I said at the top, ultimately, taxpayer money is being funneled into the wallets of cyber criminals from insurance companies by way of our nation's public school systems, and I can't see any functional mechanism for holding anyone accountable. So why would we expect any of this to change?
2:34:16 - Leo Laporte
Well, I think you can. You do the same thing the SEC does with public corporations; you do it with schools, with public schools anyway. You can't do it with private schools, probably. You pass a law: this is a data breach, and the subjects of the data breach have the right to know that their information's been compromised. So it sounds like next year there may be some federal legislation that could pass. You just need expansive data breach legislation that says any time there's a data breach, you have two weeks to reveal it to the people who were the subjects of the breach.
2:34:54 - Steve Gibson
And you can't leave it up to the states because, as these guys said, it's an absolute patchwork disaster, just a quilt of overlapping and contradictory regulations.
2:35:07 - Leo Laporte
Yeah, yeah, but I think you could have a comprehensive federal data breach law. Absolutely, and that's what you need.
2:35:13 - Steve Gibson
And we don't yet.
2:35:14 - Leo Laporte
No, but there are a lot of things Congress needs to do. It's very busy right now and it's got to get to work, so I won't hold my breath for that one. Steve, as always, great show, always interesting, full of good information. Kind of a must-listen for anybody. I almost said for anybody who works in security, but really everybody, everybody should hear it. Thank you so much for doing this job.
We do this show every Tuesday, right after MacBreak Weekly, which usually comes around 1:30 to 2 pm Pacific, 5 pm Eastern, 2200 UTC. There are eight ways to watch it live, thanks to the club. You can watch it in the Club Twit Discord if you're a member; that's a great way to watch. Or on YouTube, Twitch, Kick, X.com, TikTok, LinkedIn or Facebook. You take your pick. But honestly, most people don't watch live, because why watch live when you can download a copy of the show and listen at your leisure, or watch at your leisure?
Steve has a couple of versions on his website: the unique 16-kilobit audio version we've mentioned, and a 64-kilobit audio version, which is actually a version we stopped putting out, but Steve does have it. He has transcripts, human-written transcripts from Elaine Farris. They're fantastic, a great way to read along or to search. Every show, all 1,012 shows, is there, along with the show notes, at GRC.com. Now, while you're at GRC, pick up a copy of SpinRite, the world's best mass storage maintenance, recovery and performance-enhancing utility. Soon we're going to get that DNS Benchmark Pro, which will be great. That'll all be at GRC.com. There's lots of other free stuff there too, including ShieldsUp.
2:37:02 - Steve Gibson
You were mentioning Elaine, and there was something I couldn't remember that I wanted to mention that just goes to who she is. She wrote, this was last Friday, she said: "Steve, the podcast has been transcribed, it has been proofread, but it's not ready to send because I have a counting problem. Leo speaks 174 times, you speak 172. With Leo beginning and ending, there should be only a difference of one. I can't make this work out." Because, she's saying, you and I are alternating.
2:37:44 - Leo Laporte
We do alternate. Well, I speak 174.
2:37:48 - Steve Gibson
She had a count of 172 for me. Either you had to be 173 or I had to be 173. She wouldn't let me have the transcript until she figured it out.
2:38:00 - Leo Laporte
How did she figure it out? She'd just missed something.
2:38:02 - Steve Gibson
I don't know. She sent that at 7:49, and then at 8:16 she said: "Hi Steve, fixed it. Whew, I was so worried about the horses that I couldn't concentrate. Once I gave up, I found the problem. Have a good weekend, Elaine." So that's a...
2:38:22 - Leo Laporte
That's a sanity check she does in addition to everything else. She counts the number of times each of us speaks, and they need to match up. One of us might speak one more time than the other.
2:38:32 - Steve Gibson
Well, never you. You always lead off the podcast. You start and end, so you're going to have one more speaking turn. Actually, you always get the last word.
2:38:43 - Leo Laporte
So I get the first word, you get the last word.
2:38:47 - Steve Gibson
You get the first word, I get the last word. That's right, because I say "okay," or "I'll see ya," or something. You say "see ya." Yeah.
2:38:52 - Leo Laporte
It's so funny. Good for you, Elaine. Put a little Easter egg in the transcript this week, just a little something, just to let us know you're there. You can also get everything Steve has at GRC.com, including getting on his mailing list. Now, this is actually a twofer. When you go to grc.com/email and submit your email address, you're telling Steve, "I'm a real person, I'm not a spammer," and you can then email him with thoughts, questions, pictures of the week, that kind of thing, at securitynow@grc.com. Oh, did I say it wrong? Security...
2:39:28 - Steve Gibson
No, no, no. Oh, that's the email address.
2:39:30 - Leo Laporte
Yes, yeah, yeah, yeah. So yes, from then on, you'll be able to email him. Right now, if you email securitynow@grc.com without validating, it just disappears completely. The other thing, though, is when you do that, you'll see a page with two boxes, unchecked, because it's opt-in for his newsletters. So if you want to get the show notes, for instance, ahead of time, you can. The show notes are also available for download at GRC.com.
We have a 128-kilobit audio version. No one knows why, but we do. I don't know, why is it stereo? I don't understand. It's very high quality. We also have a video version. Those are at our website, twit.tv/sn, another place you can download. If you go to that page, you'll also see a link to the YouTube channel.
That's the way we encourage you to share clips from the show. If you have a friend you want to send something to, you know, "Hey, did you see the new AI integration in Firefox 135?", you could just take that little clip and send it to her, and that would be a cool way of sharing the show. YouTube makes it very easy to do that; they have the easiest clipping thing ever.
But the best way to get it is probably to subscribe in your favorite podcast client, because that way you get it right away, the minute it's available, without thinking about it. So you always have the next Security Now ready, queued up on your device. You have a choice between audio and video; the information for all that is at twit.tv/sn. And I think that's everything. I'm taking tomorrow off. I won't be here tomorrow, but I will be back on Sunday for This Week in Tech, and Steve and I will be back next Tuesday for another gripping edition, episode 1013, of Security Now. 1013.
2:41:13 - Steve Gibson
It is. See you then, my friend.
2:41:17 - Leo Laporte
Bye.