Several weeks ago, the Linux community was rocked by the disturbing news that researchers at the University of Minnesota had developed (but, in fact, not fully executed) a method for introducing what they called “hypocrite commits” into the kernel. The idea was to seed behaviors that are difficult to detect and meaningless in themselves, which attackers could later align to manifest vulnerabilities.
This was quickly followed by the (in some senses equally disturbing) news that the university had been banned, at least temporarily, from contributing to kernel development. A public apology from the researchers followed.
While the development and disclosure of exploits is often messy, running technically complex “red team” programs against the world’s biggest and most important open-source project feels a bit extra. It’s hard to imagine researchers and institutions so naive or derelict as not to understand the potentially huge blast radius of such behavior.
Equally certain, the project’s maintainers and governance have a duty to enforce policy and avoid having their time wasted. Common sense suggests (and users demand) that they strive to produce kernel releases that don’t contain exploits. But killing the messenger seems to miss at least part of the point: this was research rather than pure malice, and it casts light on a kind of software (and organizational) vulnerability that calls for technical and systemic mitigation.
Projects of the scale and ultra-criticality of the Linux kernel aren’t prepared for game-changing, hyperscale threat models.
I think the hypocrite commits setback is symptomatic, on all sides, of related trends that threaten the entire extended open-source ecosystem and its users. That ecosystem has long struggled with problems of scale, complexity and the increasingly critical importance of free and open-source software (FOSS) to every kind of human enterprise. Let’s look at this complex of problems:
- Bigger open-source projects now present bigger targets.
- Their complexity and pace have outgrown the scale that traditional “commons” approaches, or even more evolved governance models, can cope with.
- They are evolving to commoditize one another. For example, it is becoming increasingly difficult to say categorically whether “Linux” or “Kubernetes” should be treated as the “operating system” for distributed applications. For-profit organizations have taken note and have begun reorganizing around “full-stack” portfolios and narratives.
- In doing so, some for-profit organizations have begun to distort traditional patterns of FOSS participation. Many experiments are underway. Meanwhile, funding, staffing commitments to FOSS and other metrics appear to be in decline.
- OSS projects and ecosystems are adapting in a variety of ways, sometimes making it difficult for for-profit organizations to feel at home or to benefit from participation.
Meanwhile, the threat landscape continues to evolve:
- Attackers are bigger, smarter, faster and more patient, leading to long games, supply-chain subversion and more.
- Attacks are more financially, economically and politically profitable than ever.
- Users are more vulnerable and exposed to more vectors than ever before.
- The growing use of public clouds is creating new layers of technical and organizational monoculture that may enable and justify attacks.
- Complex commercial off-the-shelf (COTS) solutions, assembled partly or entirely from open-source software, create elaborate attack surfaces whose components (and interactions) are accessible to, and well understood by, bad actors.
- Software componentization enables new kinds of supply-chain attacks.
- Meanwhile, all of this is happening as organizations seek to shed non-core expertise, convert capital expenses to operating expenses, and depend increasingly on cloud providers and other entities to do the hard work of security.
The net result is that projects of the scale and ultra-criticality of the Linux kernel aren’t prepared for game-changing, hyperscale threat models. In the specific case we’re examining here, the researchers were able to target candidate incursion sites with relatively little effort (using static analysis tools to assess units of code already identified as needing contributor attention), propose “fixes” informally via email, and exploit many factors, including their own established reputations as reliable and frequent contributors, to bring exploit code to the verge of being merged.
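To make that first step concrete: the kind of triage described above can be approximated with even a crude static pass. The sketch below is a toy illustration in Python, not the researchers’ actual tooling; the heuristic (flagging a pointer that is referenced again after being passed to kfree()) is an assumption chosen for simplicity.

```python
import re
import sys
from pathlib import Path

# Toy heuristic: within each (naively delimited) function body, flag any
# identifier that is passed to kfree() and referenced again afterward.
# Real tools (e.g., Coccinelle, clang-analyzer) do this with proper
# parsing and path sensitivity; a regex pass only surfaces candidates
# for human review.
KFREE = re.compile(r"\bkfree\(\s*(\w+)\s*\)")

def candidate_sites(source: str):
    findings = []
    # Kernel style puts a closing brace at column 0 at the end of each
    # function, so split on that as a rough function boundary.
    for body in source.split("\n}\n"):
        for match in KFREE.finditer(body):
            ident = match.group(1)
            tail = body[match.end():]
            # Any later use of the same identifier is a candidate
            # use-after-free site worth a closer look.
            if re.search(r"\b%s\b" % re.escape(ident), tail):
                findings.append(ident)
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for ident in candidate_sites(Path(path).read_text(errors="ignore")):
            print(f"{path}: possible use after kfree({ident})")
```

A pass like this yields mostly false positives, which is precisely the point: it cheaply narrows thousands of files down to a shortlist of fragile error paths where a plausible-looking “fix” could hide a flaw.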
This was a serious betrayal, effectively committed by “insiders” against a trust-based system that has historically worked very well at producing robust and secure kernel releases. The breach of trust itself is a game-changer, and the implicit follow-on requirement, pairing mutual human trust with systematic mitigations, looms large.
But how do we deal with such threats? Formal verification is effectively impossible in most cases. Static analysis may not reveal cleverly engineered incursions. Project pace must be maintained (there are, after all, known bugs to fix). And the threat is asymmetric: as the classic line goes, the blue team has to defend against everything, while the red team only has to succeed once.
I see some remedial possibilities:
- Limit the spread of monocultures. Things like AlmaLinux and AWS’s open distribution of Elasticsearch are good, partly because they keep widely used FOSS solutions free and open source, but also because they inject technical diversity.
- Re-evaluate project governance, organization and funding with an eye toward mitigating wholesale dependence on the human factor, as well as incentivizing for-profit companies to contribute their expertise and other resources. Most for-profit companies would be glad to contribute to open source because of its openness, not in spite of it, but in many communities this may require a culture change among existing contributors.
- Accelerate commoditization by simplifying the stack and verifying components. Push the appropriate responsibility for security up into the application layers. (A minimal sketch of component verification follows this list.)
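On the “verifying components” point, one concrete and widely practiced form is pinning each third-party component to a known cryptographic digest and refusing to build or ship otherwise. The sketch below is a minimal illustration in Python; the manifest.json format and file layout are assumptions made for the example, not any particular project’s standard.

```python
import hashlib
import json
import sys
from pathlib import Path

# Minimal component verification: compare each vendored artifact against
# a pinned SHA-256 digest recorded in a manifest, e.g.:
#   {"vendor/libfoo-1.2.tar.gz": "9f86d081884c7d65..."}
# (manifest.json and this layout are illustrative assumptions.)

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest_path: str = "manifest.json") -> bool:
    pins = json.loads(Path(manifest_path).read_text())
    ok = True
    for name, expected in pins.items():
        actual = sha256_of(Path(name))
        if actual != expected:
            print(f"MISMATCH {name}: expected {expected}, got {actual}")
            ok = False
    return ok

if __name__ == "__main__":
    # Fail the build or deploy step if any component has drifted.
    sys.exit(0 if verify() else 1)
```

Digest pinning alone cannot stop a malicious but “official” release; combined with signature checks and reproducible builds, though, it shrinks the window in which a silently substituted component can ride a trusted pipeline.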
Basically, what I’m advocating here is that orchestrators like Kubernetes need to matter less and Linux needs to have less impact. Finally, we should move as fast as we can toward formalizing the use of things like unikernels.
Whatever happens, we need to make sure that businesses and individuals provide open source with the resources it needs to keep going.