Sohliloquies


I'm a security engineer with a lot of side projects, most of which find their way here. I like to center my research around computer security, cryptography, Python, math, hard problems with simple answers, and systems that uphold their users' values.

You can also find me on Twitter.

1 September 2015

What Does the Internet of Things Mean for Security?

We are standing at the edge of a steep hill strewn with very sharp rocks, and internet-connected hardware manufacturers are trying to push us over.

Imagine you woke up one day to find out that overnight you lost access to every account you use online -- Facebook, Twitter, Gmail, you name it. Worse, because all your password resets run through your Gmail account, there's no easy way to get these accounts back. Now imagine that all of this happened because you bought the wrong fridge.

This sounds like an absurd thought experiment. It isn't. For the few people unlucky enough to own the vulnerable model of 'smart' fridge -- a real product, made by Samsung, costing about $3600 -- this could actually happen. An attacker who can get within radio range of the fridge could take over its owner's Google account without breaking a sweat.

Once we've got the Google account (imagining ourselves now in the attacker's shoes, taking advantage of some poor sap who dropped close to four grand on an absurd fridge), we can pull up the victim's other accounts, hit that big fat "password reset" button, and get a link delivered to the inbox we now control, inviting us to set each account's password to whatever we like. Note that the strength of the original passwords has no bearing on whether this attack works.
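
For the technically curious, the reported root cause falls into a depressingly common bug class: a TLS client that never checks the certificate it's handed, so anyone sitting on the network path can impersonate Google's servers and harvest whatever credentials the fridge sends. Here's a minimal Python sketch of the vulnerable pattern next to the correct one -- illustrative only, certainly not the fridge's actual code, and the endpoint is just an example:

```python
import socket
import ssl

HOST = "calendar.google.com"  # example endpoint, not necessarily the fridge's

# The vulnerable pattern: a TLS client configured to skip certificate
# checks. Anyone who can get between this client and the server (say,
# on the owner's wifi) can impersonate the server and read whatever
# credentials the client sends.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# The correct pattern: validate the certificate chain and the hostname.
secure = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw:
    with secure.wrap_socket(raw, server_hostname=HOST) as tls:
        print("verified peer:", tls.getpeercert()["subject"])
```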

I don't mean to rag on Samsung here -- they're not much worse than any of the other major players, really. And I'm sure this problem will be patched soon. To Samsung's modest credit, it seems like the fridge at least has better security around its self-update subsystem than around the rest of its systems. An attacker could take over your life, but at least they couldn't install a rootkit in your fridge, probably.

But the issue here isn't this specific threat. It's the fact that we're moving towards a world where threats like this one are normal. After all, these vulnerabilities are already commonplace -- I only named one, but there's way more where that came from, and you'd best believe that black-hats are taking note. Even Bruce Schneier, who is famously levelheaded and arguably the best source for security commentary around, is raising the alarm.

(Footnote: Schneier might fault me for scaring you a few paragraphs ago with a "movie-plot threat", a threat that's compelling for the same reason that it's irrelevant: its specificity. The thing is, we humans are real bad at judging risks if we can't tie them to concrete sequences of events, so I opted to throw out a concrete example of how bad things can get, to make sure I got your attention. Even if that specific attack on that specific product will soon be gone for good, you can bet that we're going to be seeing a lot more like it in the days and years to come.)

It's tempting to dismiss these concerns by saying that we should focus on the problems we face in the present moment, rather than fretting about the future. Brian Krebs seems to be of this opinion ("from where I sit, the real threat is from The Internet of Things We Already Have That Need Fixing Today."). I don't disagree, for the most part: we have a lot of things that need fixing, and need fixing now. We're not doing a very good job of fixing them. We desperately need to figure out what we're going to do about this. But it's hard to think of a single problem we face today that the Internet of Things isn't going to make much, much worse.

Why? Well, I'm not here to give a history lesson, but some background might help here. Let's make this fast.

The old guard tell stories of when the internet was yet young and all you needed to be somebody was a terminal and a helpful diagram explaining that the call stack grows downwards, and here's where it stores return addresses, and here's a quick reference for writing shellcode, and hey look, you have a cool toy. One beautiful thing about this time, enshrined both in hacker culture and in the wide range of media that tried (with varying levels of success) to cash in on said culture, was that the internet acted as an equalizer. Power was in the hands of individuals, allocated in direct proportion to their passion, skills, and contacts. The L0pht famously claimed they could've shut down the internet in 30 minutes, and I'll bet they could've.

At a certain point, governments around the world realized they weren't sure how they felt about this. Lots of money went into developing states' proficiencies in the field of "cyber". They occasionally ran into big problems (strong public-key crypto and onion routing, for example -- topics on which they remain bitter to this day), but for the most part they were able to quickly and quietly become powerful players, unnoticed only because they chose not to publicize their abilities.

The NSA in particular is now quietly infamous in the culture for recruiting gobs of up-and-coming security or crypto talent. One would imagine that their contemporaries (GCHQ, Mossad, FSB/SVR RF, etc.) are similarly enthusiastic. It's gotten to the point where some commentators suggest, only half in jest, articulating one's threat model in terms of "Mossad or not-Mossad" (feeling free, of course, to substitute any other intelligence org to taste).

Why is this relevant? Because there is a certain asymmetry here, and it's about to get a whole lot worse. The asymmetry is that organizations like the NSA or its peers are able to throw a whole lot of person-power at the issue of finding security bugs in software. A whole lot more than one can really expect to see in the public security community. I don't mean that as a criticism of us who work in the open: rather, it's a testament to just how much you can get done when you hire a lot of smart people and give them lots of money to do this one very specific task.

Anyone who's played black-hat before, who's sat on or watched a red team, or who's even done a postmortem security audit, can tell you that the first step to owning a network is owning one box, and that once you have that first box, the avenues of attack available to you increase exponentially. If we assume a world in which IoT devices are prevalent and insecure (reasonable assumptions both), then for any practical network penetration, an IoT device is almost certainly your first stop, and a huge advantage goes to attackers with prior knowledge of extant vulnerabilities in whatever devices the poor target may happen to own. Due to their aggressive recruitment of talent and vast amounts of classified research, the NSA & co already have the upper hand when it comes to hacking (if you need convincing, just look up Tailored Access Operations). Increasing the attack surface the way IoT prevalence would doesn't just multiply their advantage: it exponentiates it.

We live in a world where national intelligence agencies have powers that are hard to believe. We're on the verge of a world in which their powers could literally defy belief. I don't know about you, but that's sure not something I'm comfortable with.

But it's not just these agencies whose lives would be made easier by the mass proliferation of Internet of Things devices. It'd be a boon to organized crime, too, as the Ars Technica link given earlier outlined. If you own or want to start a botnet, the Internet of Things is your wildest dream come true. Discover one obscure exploit, and you could add thousands of devices to your horde almost instantly. Pretty soon that'll be tens of thousands. Enterprising black-hats could even trivially slap together a masscan-like tool to look for & log addresses of known IoT devices, keeping headcounts for each device. Then you know which devices to spend your time on, and whenever you uncover a 0-day exploit you have a list of vulnerable Things ready to go.
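
To make that bookkeeping concrete: the sketch below grabs whatever banner a host's web interface offers and tallies matches against a table of device signatures. A rough sketch, with an invented signature table and a hypothetical address range -- and something you should only ever point at hardware you own.

```python
import socket
from collections import Counter

# Toy signature table mapping banner substrings to device types.
# These entries are illustrative, not a real fingerprint database.
SIGNATURES = {
    b"RomPager": "embedded router",
    b"GoAhead": "ip camera",
    b"lighttpd": "misc embedded device",
}

def fingerprint(host, port=80, timeout=1.0):
    """Return a device label if the host's HTTP banner matches a signature."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            banner = sock.recv(1024)
    except OSError:
        return None  # closed, filtered, or unreachable
    for needle, label in SIGNATURES.items():
        if needle in banner:
            return label
    return None

if __name__ == "__main__":
    counts = Counter()
    for last_octet in range(1, 255):  # e.g. a /24 on your own lab network
        label = fingerprint(f"192.168.1.{last_octet}")
        if label:
            counts[label] += 1
    print(counts)  # headcount per device type
```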

I personally know a number of people who like to roll out the old argument that they're secure because they're boring. "Why would anyone want to take the time to hack little old me?" This is a very weak argument. Botnets don't care how interesting you are; they care how many cores your processor has. Automated scans find everyone, interesting and boring alike, and the same goes for automated mass-scale exploit deployment.

And what are the odds of the user noticing if their internet-connected robot vacuum gets compromised? Pretty low -- we simply don't pay as much attention to these devices as we do to desktop or laptop computers. Why would you spend more than a few seconds at a time interfacing with your smart calendar? Would you notice if its interface latency were a bit higher than usual? Say your internet-enabled fridge/toaster/TV/router/thermostat/lamp gets hacked. If 90% of its idle compute power were put towards (say) bitcoin mining, you probably wouldn't even notice. None of us would. If you think that isn't a big deal, you aren't thinking hard enough.

So, what do we do? We have a problem, sure, but what about solutions? I'm glad you asked. The first, most obvious step is just to not buy this crap (here is as good a place as any to leave the obligatory s/o to Twitter masterpiece Internet of Shit). I mean, really, who actually wants a web-enabled pill dispenser? Or the ability to order a pizza by beating up your house? Or a giant ugly white cylinder that apparently waters your plants and needs to be connected to the internet to do this? Buy that, and hackers don't have to stop at stealing your identity, spying on you remotely 24/7, profiting off your compute cycles, and using your fridge in DDoS attacks -- they can drown your garden, too!

Okay, maybe that's getting a little carried away. But still, when you're trying to secure a networked device, the first thing you want to do is minimize the attack surface -- the number of exposed services on that device that attackers can interact with. The idea being, the fewer things you have exposed, the fewer things there are to attack, and the lower your chances of getting hit with an exploit you didn't know existed. That's the mindset that security people tend to bring to this issue, and it is the polar opposite of the Internet of Things philosophy. Every Thing you expose to the internet through your local network is one more goody offered up to malicious outsiders, and one more potentially untrustworthy device you've surrounded yourself with.
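
If you want to see this in numbers, a crude first pass is just to count the TCP services a gadget exposes. A minimal sketch, assuming a device sitting at a made-up address on your own network:

```python
import socket

# Ports commonly exposed by consumer gadgets: ftp, ssh, telnet, web
# UIs, and MQTT. Illustrative, not exhaustive.
COMMON_PORTS = [21, 22, 23, 80, 443, 1883, 8080, 8883]

def exposed_services(host, ports=COMMON_PORTS, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or host unreachable
    return open_ports

if __name__ == "__main__":
    # Hypothetical address for that shiny new 'smart' appliance.
    print(exposed_services("192.168.1.50"))
```

Every port that shows up open is one more service somebody has to keep patched for the lifetime of the device.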

If you aren't content with just staying away from these things, you can push for product security audits. How hard would it be for manufacturers, before shipping their buggy, insecure products, to contract a couple of security professionals to audit the system and find the gaping holes? Not very hard, but it'd cost money, and in this world, that's a problem -- one that we can only address through public pressure.

Tech companies have a long and shameful history of viewing security bugs as unimportant or even as a net loss to fix. The prevailing "wisdom" is that since audits cost money and most users supposedly don't care about security, it's cheaper to just ship whatever you have and deal with issues when they arise, if they arise. This is the sort of logic that led, way back when, to the anarchic culture of full disclosure (this link, a blog post on disclosure policies by one of the greats, is highly recommended reading). The general idea is that the only reliable way to get companies to take action is to give them a financial incentive, and the best/fastest way to do that is to light a fire under them.

Whenever a company has a high-profile vulnerability disclosed, they lose a bit of credibility. The longer that vuln goes unpatched, the more cred they lose. Thus, it's in their best interest to patch as soon as possible after disclosure. This works, almost, sometimes, and it's given rise to any number of compromise policies, such as partial disclosure. None of them seem to work much better, and if someone tells you that they like this mess that we call a system, it's a safe bet that they're either crazy or lying.

As the IoT becomes more prevalent, more manufacturers are going to have to ask themselves how they want to handle security, and my hope is that we're going to see a movement of big names getting proactive and investing up front to earn some sort of reputable seal of approval.

One advantage of auditing before shipping is that it sidesteps the issue of updates. Once a system is compromised, it's up to the attacker whether it gets updates or not, and you can guess which option they pick. Even in uncompromised systems, updating can be hard or impossible, as Schneier described in his article cited above (WIRED link).

One company that's taking admirable steps towards proactive auditing is Tesla. They've decided to one-up bug bounty programs by actually making job offers to hackers who find new vulnerabilities in the Model S. Other companies in similar situations would do well to follow their example. It's hard to imagine the adoption of programs like this on any wide scale, though, unless the companies involved see it as a strongly favorable PR move. This in turn requires a shift in public perception, and that's where we, as members of the public, can actually play some role.

What else is there to do? I'd suggest poking holes in IoT hardware to make a point, but that usually requires buying it. Instead, for the sufficiently savvy, what about joining the conversation on how to make more secure Things? Currently, most big-time IoT products come from famous hardware companies trying to cash in on their manufacturing chops by putting chips in everything, and from entrepreneurs looking for easy hardware startup ideas, when really it's the Maker movement that should be (and -- in places -- is) embracing it.

It's not that difficult to lock down a Raspberry Pi (use good passwords, disable ssh if you don't need it, etc.) or an Arduino (look out for buffer overflows & co., maybe run your code by a friend). So, if you really can't stop thinking about having a toilet that tweets or a web-enabled toaster, why not try to make your own? Shit, if it's good enough, you could even head to Kickstarter and join those annoying hardware startup people. Just remember, when your funds raised double your stretch goal, that some of that spare change could go towards hiring some security people to take a long, hard look at your product before it ships.
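
And if you do keep ssh around on that Pi, a few lines of Python can at least flag the settings most often left in a risky state. A rough sketch -- the path is the stock Debian/Raspbian location, and the option list is nowhere near complete:

```python
# Flag sshd_config settings commonly left at risky values. The path and
# option list assume a stock Raspbian/Debian install; extend to taste.
WANTED = {
    "PasswordAuthentication": "no",  # prefer keys over passwords
    "PermitRootLogin": "no",
}

def audit_sshd(path="/etc/ssh/sshd_config"):
    options = {}
    with open(path) as config:
        for line in config:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                options[parts[0]] = parts[1]
    return [
        f"{key} is {options.get(key, '(unset)')}; consider setting it to {value!r}"
        for key, value in WANTED.items()
        if options.get(key, "").lower() != value
    ]

if __name__ == "__main__":
    for finding in audit_sshd():
        print(finding)
```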