WebKit.org: What Spectre and Meltdown Mean for WebKit. A detailed technical explanation of how the Spectre attack reads system memory it's not allowed to read, and the changes WebKit is making to address the problem. This is important given WebKit's foundational position on the web: it affects Safari on iPhone and iPad, Safari on the Mac, the Apple Watch, and the built-in browsers in thousands of iOS applications.
Category: Security
Hacking
Today was the first day (the “unclosing”) of Veracode’s semiannual Hackathon. In ordinal numbers, this is the eleventh one we’ve done, though the actual name is Hackathon 10 5/7. (It’s ok; we’re all mad here.)
I am looking back at all the hacks I've done over the last few years, and it's fascinating what they reveal. Programming hacks, though I haven't been a professional developer in 17 years. Musical hacks, though I'm usually neither a bluegrass musician (though I am when our CFO is leading the band) nor a theremin player. Locksmith hacks. (Though I do have a favorite locksmith in NE DC.) Presentation hacks. Writing hacks.
I think what’s fascinating about the way that Veracode does Hackathons is that it’s an opportunity for us all to reach deep and explore some under-exercised facet of our true selves. Or failing that, to sew one on and see if we can make it thrive.
We are the champions
It continues to be challenging for me to write much right now, partly because there is so much going on. One thing that happened today: I gave a webinar on how to handle the shortage of information security people when you’re trying to build more secure software. (The answer: you grow your own.)
The recording isn’t live yet, but I posted the slides and opened a discussion thread on Peerlyst for those interested.
What I do: BSides talk
I talk sometimes about my job on this blog, but I’ve never shared myself talking as part of my job before. That changes with this post!
This is my conference talk at BSides SF 2017 about the changes in the application development landscape and how application security changes as a result.
What I’ve been up to
I keep missing blogging days, but not because things aren't busy. Here's a roundup of where I've been talking in the press, plus other stuff, from the past few months:
On the Veracode blog: Regulations like FS-ISAC and PCI are now looking at the security of open source components, are you ready? Plus a three-part series on the ransomware attack against the San Francisco Muni and software composition analysis (one, two, three).
In the press:
- ThreatPost, Code reuse a peril for secure software development.
- Internet Retailer, Holiday hangover: Those temporary web pages pose a security risk.
- Wall Street Journal, Pressured App Developers More Likely to Forget Security.
- TechBeacon, The state of software security: 5 things developers can do now.
- CyberParse, Heartbleed persists on 200,000 servers, devices.
- SD Times, Security in software needs to be Job One.
And it looks like this year's RSA, just a few weeks away, will be pretty busy. It's unfortunate that I haven't wanted to write much about other things recently, but work is definitely making up for it.
In the press
I was featured in a UK article about the conference that I spoke at in Bristol a few weeks ago.
Recent writing elsewhere
I’ve written a series of blog posts on the Veracode blog about application security. Check them out, if that sort of thing floats your boat, or if you just want to see what’s up in my professional life.
Note that I don’t generally write my own headlines, so I don’t claim responsibility for clickbaityness or comma splices. 🙂
What is free?
My company, Veracode, published our most recent State of Software Security Report yesterday (disclosure: I'm one of the authors). The report mines data from hundreds of thousands of application scans to paint a picture of the risk profile of software.
This year we included data on risk from open source components. The idea is that it's common, especially in Java development but also in JavaScript, Python, PHP, and other languages, to use libraries and frameworks developed by the open source community for foundational parts of the application's functionality. Why write a new object persistence layer (to pick one example) when you could simply use a free off-the-shelf one and focus on writing the actual behavior of the application?
Turns out there’s one major issue with this approach: all software, even open source software, is buggy, and some of those bugs are vulnerabilities: they can be exploited to compromise the confidentiality or integrity of the data the application accesses, or impair the availability of the application itself. And widely shared components create a big target of opportunity for attackers, who can focus on finding vulnerabilities in the shared components for a payoff of attacking hundreds or thousands of applications.
The open source community generally stays on top of fixing these vulnerabilities as they’re discovered. Look at any popular Java framework like Struts or Spring—you’ll see dozens or hundreds of point releases fixing all sorts of defects, including security vulnerabilities. So what’s the problem?
The problem is that developers don't upgrade to newer versions of the components they use. From the developer's perspective, there's almost zero benefit, and a real downside, to a component upgrade: it takes time away from developing the features the business has asked for, and there's a non-zero risk that upgrading the component will break functionality in the application. Meanwhile, the possibility of a hack via the component feels remote, so the upgrades don't get done.
This attitude makes sense in the short term, but in the long term is fatal for security. Because vulnerabilities do get found in older components. The best description I’ve heard of this phenomenon comes from Josh Corman (who says he heard it from someone at Microsoft): “Software doesn’t age like wine, it ages like milk.” As developers widely adopt components, the attack surface for newly discovered vulnerabilities in those components becomes broad indeed.
It’s not open source’s fault, but I do think it reflects a misunderstanding of the cost/benefit analysis for using open source. Yes, open source is free of commercial licensing fees, but it is not free of downstream maintenance costs. It’s as if someone gave you a car. Just because it’s free doesn’t mean you don’t have to periodically change the oil.
Likewise, developers who adopt open source components should set expectations with the business that they'll need to reserve some of their development time for basic maintenance, like component upgrades. Doing so proactively improves predictability and reduces the likelihood of an emergency update that disrupts the roadmap.
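One way to make that maintenance proactive rather than reactive is to check dependencies for known vulnerabilities on every build. As a minimal sketch (one tool among several; software composition analysis products cover the same ground), the OWASP dependency-check-maven plugin can fail a Java build when a component carries a published CVE above a chosen severity. The version number and CVSS threshold here are illustrative:

```xml
<!-- pom.xml fragment: flag known-vulnerable components at build time -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>1.4.5</version>
  <configuration>
    <!-- Fail the build for vulnerabilities scored CVSS 7 or higher -->
    <failBuildOnCVSS>7</failBuildOnCVSS>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Wiring a check like this into the build surfaces the upgrade conversation on the team's schedule instead of an attacker's.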
Security Goofus and Gallant
Gallant turns on opt-in end-to-end encryption in its flagship Messenger service.
Goofus builds a tool to search for CIA-provided keywords in its users’ email, then exfiltrates email with matching strings to an insecure externally facing intercept location—all without informing its CISO, who subsequently resigns.
Two views of cybersecurity cost and return
Two different reports came out in the last 24 hours about the costs of and investments required for cybersecurity. The first, a paper from the RAND Corporation's Sasha Romanosky, claims that, on average, breaches have only a modest financial impact on organizations, but also notes that the real costs are mostly not borne directly by the corporation:
while the potential for greater harm and losses appears to be increasing in time, evidence suggests that the actual financial impact to firms is considerably lower than expected. And so, if consumers are indeed mostly satisfied with firm responses from data breaches, and the costs from these events are relatively small, then firms may indeed lack a strong incentive to increase their investment in data security and privacy protection. If so, then voluntary adoption of the NIST Cybersecurity framework may prove very difficult and require additional motivation.
Bruce Schneier interprets this as meaning that there is a market failure requiring government intervention. That’s certainly one way to view it.
Another perspective: it's a good idea to lower the cost of defending against breaches. That's what is suggested by the second report, a study funded by my employer Veracode and conducted by Wakefield Research, called "Bug Bounty Programs Are Not a Quick-Fix." The research found that:
- 83% of respondents released software without testing for or fixing software vulnerabilities;
- 36% use bug bounty programs;
- 93% believe that most flaws found in bug bounty programs could have been found and fixed by developer training or by testing in the development phase; and
- 59% believe that training and testing would be more cost effective than bug bounties.
On the airing of security grievances
I had a great day yesterday at DevOpsDays NYC. I gave a talk, but I also learned a lot from the other speakers and from the conversations. The format of DevOpsDays is half traditional conference with speakers, half “unconference” with open proposals of discussion topics and voting to establish which topics go where. They call it Open Space, and it’s a very effective way to let attendees explore the conversations they really want to have.
I proposed an Open Space topic on the “airing of grievances” around information security. What emerged was really interesting.
Attendees talked about companies that confuse compliance with security, with disastrous results. (Hint: just because your auditor counts you as compliant because you have a WAF with rules doesn't mean those rules are actually protecting you from attack.)
We talked about advances in declarative security, in which you specify a policy for which ports should be open and closed via tools like InSpec.
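For a flavor of what that looks like, here's a minimal sketch of an InSpec control in its Ruby DSL; the control name, impact value, and port numbers are illustrative, not the specific policy anyone described at the conference:

```ruby
# port_policy.rb -- a declarative port policy as an InSpec control
control 'approved-ports-only' do
  impact 0.8
  title 'Only approved network ports are listening'

  # The HTTPS port should be open...
  describe port(443) do
    it { should be_listening }
  end

  # ...and telnet should never be.
  describe port(23) do
    it { should_not be_listening }
  end
end
```

Run it against a host with `inspec exec port_policy.rb -t ssh://user@host`, and the policy passes or fails like any other test suite.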
We talked about the pains of trying to integrate legacy appsec tools into continuous integration pipelines (which happened to be the subject of my talk). I heard about people trying to integrate on-premise static analysis tools into their Jenkins toolchains, only to have the application under test exhaust all the memory on the scanner's machine. About on-premise dynamic scanners that run for eight hours. About the challenge of determining whether an attack has successfully made it past a web application firewall.
And then Ben Zvan said (and I paraphrase), “We have a man-in-the-middle firewall (proxy) between our desktop network and the Internet that screws with security certificates, so I can’t use services that rely on certs for secure communication.”
And the floodgates opened. I talked about the secure mail gateway, intended to prevent phishing, that pre-fetches links in emails and thereby breaks one-time-use links intended for secure signup to new services. We talked about endpoint protection tools that can't keep up with the macOS update schedule, forcing users to choose between taking an OS update and having the endpoint protection tool break, or skipping the update and remaining at risk from a dangerous newly announced vulnerability.
The conclusion we reached is that it's a dangerous irony that security tools actively stomp on security features, but it's also the new reality. The complexity of the information security tool stack increases every year, with more and more vendors entering the space, and CISOs are forced to become system integrators who figure out which tools conflict with which.
The lesson is clear: If security solution providers are serious about security, they need to build for reduced complexity.
Maps of new territories
DevOps Topologies: considering all the ways that development and operations organizations can work together (or separately) and what’s wrong (or right) about each option.
The myth of fingerprints
InfoWorld (Chris Wysopal): Election system hacks: we’re focused on the wrong things. Chris (who cofounded my company Veracode) says that we should stop worrying about attribution:
Most of the headlines about these stories were quick to blame the Russians by name, but few mentioned the “SQL injection” vulnerability. And that’s a problem. Training the spotlight on the “foreign actors” is misguided and, frankly, unproductive. There is a lot of talk about the IP addresses related to the hacks pointing to certain foreign entities. But there is no solid evidence to make this link—attribution is hard and an IP address is not enough to go on.
The story here should be that there was a simple to find and fix vulnerability in a state government election website. Rather than figuring out who’s accountable for the breach, we should be worrying about who is accountable for putting public data at risk. Ultimately, it doesn’t matter who hacked the system because that doesn’t make the vulnerabilities any harder to exploit or the system any safer. The headlines should question why taxpayer money went into building a vulnerable system that shouldn’t have been approved for release in the first place.
I couldn’t agree more. In an otherwise mediocre webinar I delivered in June of 2015 on the OPM breach, I said the following:
After a breach there are a lot of questions the public, boards and other stakeholders ask. How did this happen? Could it have been prevented? What went wrong? And possibly the most focused on – who did this?
It is no surprise that there is such a strong focus on “who”. The media has sensationalized stories about Anonymous and their motives as well as the motives of cyber gangs both domestic and foreign. So, instead of asking the important questions of how can this be prevented, we focus on who the perpetrators may be and why they are stealing data.
It’s not so much about attribution (and retribution)…
…it’s about accepting that attacks can come at any time, from anywhere, and your responsibility is to be prepared to protect against them. If your whole game plan is about retribution rather than protecting records, you might as well just let everyone download the records for free.
So maybe we should stop worrying about which government is responsible for potential election hacking, and start hardening our systems against it. Now. After all, there’s no doubt about it: it’s the myth of fingerprints, but I’ve seen them all, and man, they’re all the same.
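A postscript on the "simple to find and fix" point: the textbook remediation for SQL injection is a parameterized query, which binds user input as data rather than splicing it into the SQL text. Here's a minimal Java sketch; the class, table, and column names are hypothetical and have nothing to do with the actual breached system:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class VoterLookup {
    // Vulnerable: concatenating input into the query means input like
    //   x' OR '1'='1
    // changes the meaning of the SQL statement itself.
    static ResultSet findUnsafe(Connection conn, String lastName) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM voters WHERE last_name = '" + lastName + "'");
    }

    // Fixed: a PreparedStatement keeps input out of the SQL grammar entirely.
    static ResultSet findSafe(Connection conn, String lastName) throws SQLException {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT * FROM voters WHERE last_name = ?");
        stmt.setString(1, lastName);
        return stmt.executeQuery();
    }
}
```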
Smart thermostats, dumb market
One of the things I've been theoretically excited about for a while in iOS land is the coming of HomeKit, the infrastructure for an Internet of Things platform for the home that includes a standard controller UI and orchestration of things like smart thermostats, light bulbs, garage door openers, blinds, and other stuff.
I’ve been personally and professionally skeptical of IoT for a while now. The combination of bad UX, poor software engineering, limited upgradeability, and tight time to market smells like an opportunity for a security armageddon. And in fact, a research paper from my company, Veracode, suggests just that.
So my excitement over HomeKit has less to do with tech-enthusiast wackiness and more to do with the introduction of a well-thought-out, well-engineered platform for viewing and controlling home devices, one that hopefully removes some of the opportunities for security stupidity.
But now the moment of truth arrives. We have a cheap thermostat that's been slowly failing; currently it doesn't recognize that it has new batteries in it, for instance. It only controls the heating system, so we have a few more weeks to do something about it. And I thought, the time is ripe: let's get a HomeKit-enabled thermostat to replace it.
But the market for HomeKit-enabled thermostats isn't very good yet. A review of top smart thermostat models suggests that the Nest (which doesn't support HomeKit and sends all your data to Google) is the best option by far. The next best option is the ecobee3, which does support HomeKit but costs $249. And the real kicker is that, to work effectively, both require a C (powered) wire in the wall, which we don't have, plus an always-on HomeKit controller in the house, like a fourth-generation Apple TV, to perform time-based adjustments to the system.
So it looks like I'll be investing in a cheap thermostat replacement this time, while laying the groundwork for a future system once we have a little more cash. I wanted to start working toward the next-gen Apple TV soon anyway. Of course, to get that, I have to have an HDMI-enabled receiver…
Recovering passcodes from iPhone 5c without a backdoor
Bruce Schneier: Recovering an iPhone 5c Passcode. Remember when the FBI insisted that Apple needed to backdoor iOS because otherwise recovering the passcode and accessing the phone of the San Bernardino killer was completely impossible? Good times.