In the latest episode of CPU security vulnerability announcements, recent reports show that Intel faces another variant of the ZombieLoad vulnerability, part of a class of vulnerabilities referred to as MDS (Microarchitectural Data Sampling) attacks. These are evolutions of the Spectre, Meltdown, and Foreshadow vulnerabilities that reared their ugly heads in 2018 – all of which had the potential to leak data via the CPU cache. The more recent MDS attacks instead maliciously obtain data from the CPU's internal microarchitectural data structures, such as its store, fill, and load buffers.
Despite their cute-sounding names, these attacks are scary because they obtain data across process boundaries and even across trusted execution environments (read: across virtual machine boundaries). This means bad actors can potentially acquire information from any multi-tenant architecture where different principals share the underlying physical CPU (think Cloud, Virtualization, and Containerization).
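As a practical aside: on Linux, the kernel reports whether the running CPU is affected by MDS and which mitigation is active through sysfs, under /sys/devices/system/cpu/vulnerabilities/. The snippet below is a minimal sketch that simply prints that report; treat the exact output text as informational, since it varies by kernel version and CPU.

```c
#include <stdio.h>

/* Print the Linux kernel's reported MDS vulnerability/mitigation status.
 * Older kernels may not expose this file at all. */
int main(void)
{
    const char *path = "/sys/devices/system/cpu/vulnerabilities/mds";
    char line[256];

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    if (fgets(line, sizeof line, f))
        printf("MDS status: %s", line);
    fclose(f);
    return 0;
}
```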
Rather than focus on the scary aspects of the vulnerabilities themselves – there are plenty of reports that cover that angle – I would like to take a moment to reflect on the underlying reason for these vulnerabilities. It is a bit esoteric, but stick with me. It is the insatiable desire for more computing speed that is at the heart of these vulnerabilities.
We all want faster processing: to increase throughput, crunch pixel calculations faster for our video games, render web pages more responsively, perform increasingly complex cryptographic operations faster, handle our spreadsheet, word processing, and layout applications faster, and so on. This has driven CPU manufacturers to become incredibly clever not only at fitting more transistors on a chip and driving them at higher clock speeds, but also at devising algorithms for executing instruction streams and caching memory accesses. While pursuing my Ph.D., I was fascinated by geeky topics like branch prediction, multiple instruction pipelines, speculative execution, re-order buffers, cache algorithms, and all the other techniques that chipmakers employ in their microarchitectures to eke out small percentage increases in performance.
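For readers curious how one of these clever tricks can be turned against us, here is a minimal, illustrative sketch of a Spectre-v1-style gadget (the names array, probe, and victim are mine, purely for illustration, and this is not a working exploit). The idea: after the branch predictor has been trained to expect the bounds check to pass, the CPU speculatively executes the body with an out-of-bounds index, and the secret byte it reads leaves a footprint in the cache that an attacker can later recover by timing accesses to the probe array.

```c
#include <stddef.h>
#include <stdint.h>

#define ARRAY_SIZE 16

uint8_t array[ARRAY_SIZE];     /* attacker supplies the index into this      */
uint8_t probe[256 * 4096];     /* one cache line per possible secret value   */

void victim(size_t index)
{
    /* Architecturally, out-of-bounds indexes never get past this check. */
    if (index < ARRAY_SIZE) {
        /* Speculatively, a mistrained branch predictor may run this load
         * with index >= ARRAY_SIZE, reading a byte it should never see... */
        uint8_t secret = array[index];

        /* ...and touch a probe cache line selected by that byte. Timing
         * which line is cached afterward reveals the secret value. */
        (void)probe[(size_t)secret * 4096];
    }
}
```

The speculative work is eventually squashed, so the program's visible results are correct; the trouble is that the cache side effects are not rolled back, and that is the crack these attacks pry open.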
At what expense, though, does all this speed come? Well, security, it seems. CPU (micro)architects have not had to think like security analysts, and they have developed their clever speed tricks in a vacuum devoid of malicious actors. Surely that will change as vulnerabilities like these come to light, but what are we, as a society, going to do if the answer is to strip out these tricks and stick with straightforward, in-order execution of instructions and minimal speedups? Are we willing to suppress our hunger for speed in favor of keeping our data more secure? Are we going to revisit our thoughts on sharing compute resources across multi-tenant CPUs?
I think we face interesting times, and it is important not only to consider what the CPU manufacturers are doing to address these vulnerabilities, but also to think about what our business and personal priorities are for avoiding the risks. What is the right balance between speed and safety? It is an age-old question, and we would be wise to keep it in mind.